PA State Reps Spend Snow Day With AI Experts
Today, lawmakers heard deep, different, encouraging and scary perspectives on AI and everyday life from experts in technology, healthcare, education and law.
The experts shared their insights and policy recommendations on the impact of generative AI as part of a hearing hosted by Reps. Chris Pielli (D-Chester), Bob Merski (D-Erie) and Jennifer O'Mara (D-Delaware).
Readers know I’ve urged bigger conversations about generative AI beyond “should we use ChatGPT,” and I applaud my PA Representative Chris Pielli for delivering. The teacher and former Capitol Hill staffer in me also awards bonus points to the lawmakers, and especially their staff, for quickly adapting to a Zoom format in response to today’s snowstorm.
At the end of the well-attended, two-hour event, Rep. Pielli explained that this hearing was a “starting point” and that creating policy around the technology was a “daunting task.”
Synthesizing a two-hour hearing is also a daunting task, so I’ve created a simple summary of the industry perspectives presented today. Each section has my impression, the “best quote,” key points, the unanswered question and the scariest or most sci-fi moment. Full testimonies of each expert are available here.
Panel 1: Technology
Impact of Technology on the Workforce (Tyler Clark, Microsoft)
Impression: Microsoft has heard, internalized and is now forcefully sharing the “protect human intelligence and jobs” message in public generative AI discussions. At first glance, the company seems to be backing that message with action. Whether the message is fully supported by organizational policy remains to be seen, but last December Microsoft launched a new partnership with the AFL-CIO (yes, you read that right) to “explore how AI can support worker interests and wellbeing.” Also part of that agreement (not included in Clark’s testimony): Microsoft will remain neutral in any future union organizing efforts.
Other key points:
On the pro-social side…Clark’s testimony emphasized Microsoft’s intentional branding of its software “Co-Pilot” as a program designed to work with, not replace, humans. Co-Pilot certainly has a more “we love humans” sounding name than the robotic moniker Chat Generative Pre-trained Transformer (aka ChatGPT).
He also emphasized his company’s commitment to a responsible approach to AI, noting that “AI should be used to empower people and organizations, not to replace them.” Toward that goal, Clark outlined Microsoft’s pro-social AI investments such as AI for Earth, the AI for Humanitarian Action program and TechSpark, which brings technology literacy and opportunities to rural and small-town America.
After repeatedly assuring lawmakers that Microsoft cared about humans, Clark also shared data indicating that employers need AI and employees with AI skills. He cited a 2023 LinkedIn Future of Work Report, which noted: 1) a 75% increase each month since 2023 in members adding AI-related terms to their profiles; 2) that 44% of executives plan to increase AI use in their organizations; and 3) that only 4% of those executives plan to reassess roles and reduce headcount as a result of AI in the workplace (phew…I thought we were losing jobs…ok, wait, I really want to see the mechanics behind that 4% metric…consider me skeptical).
Clark shared a nifty Co-Pilot tutorial during which he showed lawmakers how to create a news release based on the hearing’s agenda page.
Best Soundbite: “The best way to build trust in AI is to have a strong approach to data and privacy policy,” Tyler Clark, Microsoft.
Left unanswered: As a public relations professional, I’m conflicted about Clark’s Co-Pilot tutorial, in which he demonstrated how AI can take over a common, and frequently boring, public relations task: creating a news release. The root of my concern is not discipline protectionism but rather a larger question: who gets to decide which tasks are important enough to be completed by humans and which ones get relegated to AI? The answer impacts job security and equity.
Lawmakers asked Clark if he knew how many jobs are expected to be eliminated because of AI. Clark did not answer the question but instead pivoted to showcasing the investments Microsoft has made in AI job training…and at least one lawmaker noticed the evasion. Other lawmakers expressed appreciation for the investments in career training but noted that these initiatives might be better suited to younger workers, leaving more seasoned workers out of the AI literacy loop.
Scariest/Most Sci-Fi Moment: As a young political comms person in DC, I covered many a Congressional hearing…usually on aviation and/or transportation. I’ve heard many a question repeated over the years. But today a lawmaker asked the one question I never had on my bingo card: “Does AI possess any moral agency?”
No, Clark eventually answered.
Panel 2: AI and Healthcare
Panelists: Dr. Chandan K. Sen, PhD, FNAI, Director of the McGowan Institute for Regenerative Medicine, and Dr. Deeptankar DeMazumder, MD, PhD, Associate Professor of Surgery, McGowan Institute for Regenerative Medicine
Impression: I’ve argued that communication specialists will have a role educating and assuring publics about generative AI in health care. On one hand, I felt validated on that point by the remarks of Drs. Sen and DeMazumder. On the other, I felt my mind explode as they brought up other “holy moly” topics I had yet to consider.
Best Soundbite: Explaining that academia would be better positioned to address issues of equity, diversity, and access related to AI and healthcare if private industry would “butt out,” Dr. Sen said, “We recognize these issues are important, and right now the answers are not available. That leads to the thought that we in academia need to be doing this stuff without the pressure of commercialization. We like working with technology companies. We don’t like working for them. Professors should be able to look at this and call out concerns objectively.”
Key Points:
Dr. Sen noted that next generation AI-powered health care is “a dream come true,” but “only if we do it right.” For Drs. Sen and DeMazumder, doing it right means: 1) educating patients and providers about AI; 2) ensuring AI benefits are not limited to the wealthiest and most tech-savvy patients; 3) creating inclusive datasets for AI analysis; 4) protecting the privacy of health care data; and 5) preventing unintended consequences, such as insurance companies charging more for people whom an AI algorithm flags as more expensive based on their health history or even their shopping lists.
Dr. DeMazumder gave an easy-to-understand explanation of AI in healthcare for those of us without advanced degrees in medicine and technology. Specifically, providers can use AI to:
Make better long-term predictions from existing data and thus prescribe preventive action that avoids long-term health consequences and saves money for patients, organizations, taxpayers and customers.
Provide a clinical and pictorial dashboard allowing providers to better understand disease management and thus prescribe better treatments.
Save time and resources through AI digital assistants that could answer more basic consumer/patient questions.
Use data to quickly understand patients more holistically.
Dr. DeMazumder also explained that part of the solution for addressing bias in AI health care data sets is recruiting more coders of color. He noted that previous recruitment efforts yielded underwhelming results because prospective high school coders of color became skeptical once they realized that the company offering the job and training lacked diversity in its leadership.
However, Dr. DeMazumder explained that his organization found more success once they approached church leaders, explained the importance of reducing bias, and asked these and other community influencers to recruit individuals on their behalf.
Holy Moly Moment: Drs. Sen and DeMazumder explained that generative AI can use “throw-away health data” that normally wouldn’t be collected to analyze and predict future health outcomes. Wait, what is “throw-away health data,” and can I just keep mine?
Unanswered question: The second panel featured lots of discussion about who is ultimately responsible for outcomes involving AI and health decisions. For example, if an AI-powered diagnosis is wrong, perhaps because it pulled from an incomplete data set entered by a biased coder, and someone gets hurt…who takes the blame? Is it the doctor? The technology company?
I did not hear much discussion about patient recourse in these situations, and I would have liked to hear more about how we educate patients on the risks and benefits of AI in health decisions. Weighing the pros and cons of various AI-powered health interventions is a lot to ask of someone facing a life-threatening or long-term health decision.
Scariest/Most Sci-Fi Moment: Part of the discussion swerved toward how technology adoption, as well as nature versus nurture, influences healthcare outcomes. Both panelists noted that younger generations will think differently from their Gen X and Boomer parents when it comes to technology, including AI, in health care. For example, Dr. Sen noted a study that found some toddlers obeyed Alexa’s voice more than mom’s when it came to the simple instruction, “go to bed.”
Panel 3: Education
Panelists: Dr. Richard Burns, Professor of Computer Science, West Chester University; Samuel Hodge Jr., Professor, Fox School of Business, Temple University; and Michael Soskil, STEM Teacher, Wallenpaupack South Elementary School
Impression: Alright, as a teacher I’m biased, but these panelists really did well by their fellow educators by noting the larger issues, complexity, benefits and risks of AI in K-12 and higher education. I found these perspectives refreshing, especially given the conflict-driven news headlines that focus only on AI and cheating or hint that teachers are afraid of AI. And, since I’m a teacher, I’m giving extra best quotes to my fellow educators.
Key Points:
A fellow Golden Ram and chair of WCU’s Department of Computer Science, Dr. Richard Burns positioned his comments within the larger context of our university’s mission: developing students’ critical and creative abilities, strengthening their problem solving and exercising agility in an era of technological change. Burns noted the frenetic pace of AI and pointed out six major AI developments since the committee announced the hearing last year.
Best Quote: “The good news is there are qualities that differentiate a thinking human from an AI, and these distinctions will be key in an age of rapid change,” Dr. Burns said of higher ed’s role in the AI era.
Scariest/Most Sci-Fi Moment: The best is yet to come? Burns also noted that most discussion about generative AI to date has focused on large language models but warned that something called artificial general intelligence (AGI) could arrive “in our lifetimes.” The risks and benefits of AGI are not fully known, but AGI might make ChatGPT seem cute: it could reason like a human, solve more complicated problems or make new discoveries. (Wait, does this change the answer in the first section about whether AI has moral agency?)
Next, law expert Dr. Samuel Hodge from Temple’s Fox School of Business explained that states must not wait for the federal government to protect their citizens and children from deepfake threats.
“The best way of regulating this practice,” he said, “is attaching criminal penalties.” Hodge explained that the law offers murky recourse for individuals who suddenly find themselves the star of a porn movie they did not create, because the deepfaked image is not technically the person’s body. Hodge said he didn’t agree with that technicality either, but that is the law. Attaching criminal penalties, amending existing revenge porn laws or stipulating that these acts are invasions of privacy are all ways to fight these cybercrimes.
Scariest/Most Sci-Fi Moment: Hodge explained that attaching criminal penalties probably wouldn’t stop professional child pornographers but would give other would-be fake pornographers pause. I guess that’s a start.
Finally, Michael Soskil, Wallenpaupack South Elementary School STEM teacher and recipient of the Presidential Award for Excellence in Mathematics and Science Teaching, gave his perspective on AI in K-12 classrooms. Soskil advocated for training and financial support for teachers, safeguarding the human side of education as well as student privacy, and providing an equitable distribution of human teachers (yes, human teachers).
Best Quote: Noting that some school districts in Pennsylvania had already invested in student-facing AI learning programs, Soskil said: “I hate to see machines teaching kids from the most disadvantaged school districts while kids in high resource areas are getting caring, human teachers.”
Soskil said Pennsylvania had an opportunity to lead on generative AI in the classroom to protect teachers and students. He offered three policy recommendations to lawmakers:
Teachers will need resources and training to adapt to AI. However, teachers are already overburdened, so training and resources alone will not be enough. If lawmakers want K-12 teachers to thrive in the AI era, the state needs to remove other mandates that have created more work for teachers without clear proof of effectiveness.
AI must not replace human teachers. As noted in Soskil’s quote above, some districts in Pennsylvania have already hired AI programs to teach children when human teachers were not available.
If AI technologies are adopted in the classroom, student privacy protections must exist. Soskil said he worries that bad actors might circumvent security and feed students information through these educational AI tools that is designed to radicalize their existing political or religious viewpoints or to sell them products.
Soskil also explained his concerns about how AI could further exacerbate student mental health challenges. He said he understood lawmakers’ concerns about generative AI and cheating but thought that represented the “wrong conversation.” Instead, educators and policymakers must appreciate that “AI is here,” “kids are using it,” and “we will have to change the way we teach.”
Scariest/Most Sci-Fi Moment: AI teachers in PA? Already?
Need help solving a communication problem? Contact me at Travis&Co or by email at eryn@travisnco.com

