The Times Leader and WBRE/WYOU have teamed up to take a deep dive into the world of artificial intelligence, NEP-AI. Where did this ‘new’ technology come from? How is it being used in Northeast Pennsylvania? What does the future hold? In a three-part series, we will answer these questions and more.
Artificial intelligence is a thriving technology that will be with us for the long haul, and a number of concerns and hopes come with its presence. In part three of NEP-AI, business owners, educators and physicians will lay out the road map for artificial intelligence’s future in Northeast Pennsylvania and beyond.
VizVibe
VizVibe co-founder and CEO Kevin Jones believes that artificial intelligence can contribute greatly across industries, from medicine and manufacturing to travel and tourism.
“It’s so exciting, all the different careers and all the different industries that are able to use it,” Jones said.
Jones is cautioning leaders in those industries, however, to think through their AI plans. The caution is partly financial, but it applies just as well to any other technology that rises to prominence.
“If you’re going to implement something in new technology correctly, you really have to stand back and have a strategic plan, and understand that you need to do it at scale, as opposed to going in and dumping tons of money building your own platform, and you’re going to do what you’re thinking ten years out,” Jones said. “You have to look at what you currently have infrastructure-wise.”
As the implementation of AI expands, Jones acknowledges that some hard truths may need to be realized. For one, his fellow educators (Jones is a professor at Luzerne County Community College) will need to be malleable in their teaching style. The processes and methods that have worked for years might need some altering to match the moment.
Regarding the job market, Jones is certain that AI will make an impact one way or the other, including in the industries he anticipates will undergo the biggest changes.
“It’s going to take jobs. There’s no doubt about it,” Jones said. “But if you upskill yourself, because it’s a new technology, you’re going to be fine.”
Jones elaborated on the idea that jobs will be lost, and, in the process, came to a more nuanced conclusion.
“They’re not going to get lost,” he said in reference to jobs. “They’re just going to get consolidated into higher skill jobs that those same people can have, because training those employees to do that isn’t going to be that much of a step for those companies. You’re actually going to have a higher paying job, possibly, at the place that you like to work. … you’re going to have a new set of skill sets.”
And while Jones is keenly aware of the risks, he believes shunning artificial intelligence would represent a massive technological loss.
“We have to do our part to keep an eye on the power of it,” Jones said. “But we can’t be at the point where we kill all of this technology that could do so much good because people are going to use it for bad.”
Fidbak
Julio Pertuz, the CEO and founder of the soccer training app Fidbak, is still cycling through the options he will have when the app’s user base expands. At that point, he will need to use artificial intelligence to serve those users.
“When Fidbak gets to, let’s say, 5,000 users, we’re going to have a problem,” Pertuz said. “We’re going to have more demand. … and that’s when AI comes in.”
In its infancy, Fidbak’s AI will be used to replicate what the app’s coaching base currently does – giving feedback to up-and-coming soccer players. The good data being established by the coaches will inform the AI going forward.
“For every [soccer] technique, there’s maybe eight ways of doing it wrong. Because of the experience we have teaching these techniques, it becomes a loop,” said Pertuz. “We have answers for these eight ways.”
In short, the automation tool Fidbak eventually uses will draw on the corrective advice the coaches currently give. The AI tool will then recognize those same mistakes and replicate the human feedback.
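Pertuz did not go into implementation detail, but the loop he describes – pairing each known mistake with the correction coaches already give, then letting software serve that correction – can be pictured in a minimal sketch. Everything below is hypothetical and for illustration only: the error labels, the coaching tips and the classify_error stand-in are invented, not Fidbak’s actual code.

```python
# Hypothetical illustration only - not Fidbak's actual code.
# It maps each known way of performing a technique incorrectly to the
# correction a coach already gives, then serves that correction
# automatically once a (stand-in) classifier flags the mistake.

# Coach-written corrections for a single technique, keyed by error type.
# These labels and tips are invented for the example.
FEEDBACK_LIBRARY = {
    "planted_foot_too_far": "Plant your foot closer to the ball, pointed at your target.",
    "leaning_back": "Keep your chest over the ball so the shot stays down.",
    "toe_poke": "Strike through the ball with your laces, not your toe.",
    # ...remaining error types and corrections gathered from coaches
}


def classify_error(video_clip):
    """Stand-in for a model trained on coach-labeled clips.

    A real version would analyze the clip and return one of the labels
    above, or None if the technique looks correct. Hard-coded here so
    the sketch runs.
    """
    return "leaning_back"


def automated_feedback(video_clip):
    """Return the same correction a coach would give for the detected mistake."""
    error = classify_error(video_clip)
    if error is None:
        return "Nice technique - keep it up!"
    return FEEDBACK_LIBRARY.get(error, "A coach will review this one personally.")


print(automated_feedback("clip_001.mp4"))
# -> "Keep your chest over the ball so the shot stays down."
```

The structure simply lets the coaches’ existing answers do the heavy lifting; the model only has to decide which of the eight known mistakes it is looking at.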
Until that time, the Fidbak brand will continue to expand, as will the good data that will inform Fidbak’s future.
Wilkes University
Dr. Del Lucent, associate professor of physics in the department of mathematics and computer science at Wilkes, will be watching closely as AI continues to build and revolutionize industries.
But he’ll also be keeping a close eye on the risks that, by his estimation, have slipped to the background as AI’s popularity booms.
“I don’t think the dangers that everyone gets worried about are the real dangers. I think Skynet or some sort of AI apocalypse is not going to come about from the AI we have today,” Lucent said. “I think the real problems are problems of intellectual property, and problems of fabrication of information, and problems of privacy, and problems of freedom of speech.”
Those concerns were echoed by Eric Ruggiero, chair of integrative media, art and design at Wilkes. He referenced the Hollywood strikes of 2023, noting that a particular point of contention at the time was the use of AI voices and likenesses.
Dr. Evene Estwick, chair and associate professor of communication and media studies at Wilkes, has tried to take the perspective of the students she teaches, especially as they move into the job market. She has stressed the importance of students becoming “AI literate” in order for them to stay current in the technology landscape.
Those in charge of the AI landscape have an important role to play in making sure AI is controlled and implemented properly. Estwick projects, however, that holding those leaders accountable is a task few will take on.
“There’s only a small percentage of the population that will be like, ‘oh, we caught you!’ And that will be media critics and media scholars. But the masses, I’m not sure if they really care,” said Estwick.
As for her faith in the people who will have power in the artificial intelligence industry going forward, Estwick is a skeptic.
“I am not very optimistic.”
Geisinger
Dr. Clemens Schirmer’s perspective on AI’s future implementation is balanced, mostly rooted in the belief that new technologies will not be replacing the human element of health care.
“A stethoscope was also a tool that was allowing you to do more things, or better things than before, but it wasn’t something that anyone said, ‘oh, wow, we don’t need physicians anymore,’” said Schirmer.
Schirmer is the vice chair and a professor in the neurosurgery department, the program director of the Geisinger Neurosurgery Residency, and overseer of the interventional stroke ecosystem at Geisinger Wyoming Valley Medical Center.
That comparison between AI and the stethoscope might be apt. The stethoscope was invented over 200 years ago, and Schirmer sees AI having a long lifespan as well, particularly in the Geisinger system.
“For a really long time, we will use this as a diagnostic or predictive tool that has to be validated by the clinician,” Schirmer said.
Regarding the human element, Schirmer offered a precarious, yet not-so-futuristic, hypothetical in which an AI tool from one health care provider communicates with an AI tool used by a different provider, with no humans involved. Schirmer did not endorse such a scenario, especially where health care is concerned.
These technology-to-technology moments should not be a concern for patients, at least while Schirmer is involved. He is confident that, while AI may become more prominent in certain fields, other disciplines, such as health care, will remain removed from a complete AI takeover.
“Health care is probably an area where we’re going to be a lot more lagging and cautious, the same way you’re probably not going to outsource law tomorrow to AI-based algorithms.”
The Wright Center
Dr. Jignesh Sheth, FACP, MPH, the chief medical and information officer and senior vice president at The Wright Center for Community Health and Graduate Medical Education, suggests that AI users, both professional and private, take a closer look at the software they are using.
For example, the AI product built by the trusted medical resource UpToDate tells users where its information comes from; its evidence-based answers are pulled from a curated database.
Resources that can pull from anywhere on the internet are not a source of good information, according to Sheth, especially when it comes to health care. The seriousness of the software should, in essence, match the seriousness of the question being asked of it.
“It really goes back to the directory of information that the product is utilizing. If it’s utilizing the internet, it can pull anything, so you have to go back to the source,” said Sheth. “It could be a social media post from you or I. If it was most recent, it will just think that is the real information and present that to you.”
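Sheth’s point – that a product is only as trustworthy as the directory of information behind it – amounts to restricting an AI tool to a vetted corpus and surfacing the source alongside every answer. The sketch below is a hypothetical, minimal illustration of that idea; the document entries and the word-overlap matching are invented and stand in for whatever retrieval a real product such as UpToDate actually uses.

```python
# Hypothetical illustration only - not UpToDate's or the Wright Center's software.
# A toy "closed-corpus" lookup that answers solely from a vetted document set
# and always reports where the answer came from, instead of pulling from
# anywhere on the open internet.

# (source citation, passage) pairs - both are invented placeholders.
VETTED_CORPUS = [
    ("Hypertension guideline, 2024 edition",
     "First-line treatment options include thiazide diuretics and ACE inhibitors."),
    ("Metformin drug monograph",
     "Common adverse effects include gastrointestinal upset and reduced appetite."),
]


def answer_from_vetted_sources(question):
    """Return (answer, citation) from the vetted corpus, or a refusal.

    Matching is a crude word-overlap score purely for illustration; a real
    product would use a proper retrieval model.
    """
    q_words = set(question.lower().split())

    def overlap(entry):
        return len(q_words & set(entry[1].lower().split()))

    best = max(VETTED_CORPUS, key=overlap, default=None)
    if best is None or overlap(best) == 0:
        # Refuse rather than guess from unvetted material.
        return None, "Not covered by the vetted corpus - ask a clinician."
    source, passage = best
    return passage, f"Source: {source}"


print(answer_from_vetted_sources("What are common adverse effects of metformin?"))
```

The design choice the sketch highlights is the refusal path: when the question falls outside the vetted material, the tool declines rather than reaching for whatever the open internet happens to say.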
Sheth suggested that different AI-based resources can be used for different purposes. ChatGPT-esque software is probably better suited to help you find a dinner option than it is to give medical advice.
“I think I can use [a lesser artificial intelligence tool] for a restaurant recommendation. I would not use that for a physician or a hospital recommendation,” said Sheth. “A bad meal is okay, but a bad outcome is not okay.”
Sheth touted the Wright Center’s high privacy standards for patient information, noting that its AI programs are off the grid, or not tied to the internet.
“When we use any pilot program, we take a very long time to assess these programs. None of our programs are on the grid. None of it is on the internet. … everything is secure,” Sheth said. “And we are not utilizing any product, even if it makes our life easy, that will jeopardize patient data or employee data. … We definitely treat patient and employee privacy and security with the utmost respect.”
Beyond the Wright Center, Sheth said that the handling of AI is still very much an open question, and one being considered at the highest levels.
“I know the governments around the world are trying to contain this whole AI issue and the digital usage and privacy issue,” said Sheth. “I don’t think anyone has cracked that problem yet.”