In Conversation with Matthew Stamm, Associate Professor of Electrical and Computer Engineering, Drexel University College of Engineering
At the Philly Builds Bio 3rd Annual Symposium, BioBuzz’s CEO, Chris Frew, interviews Dr. Matthew Stamm, an Associate Professor at Drexel University, about his pioneering research in information forensics and multimedia security. Dr. Stamm leads the Multimedia and Information Security Lab, where he and his team develop techniques to detect multimedia forgeries and study anti-forensic measures.
November 6, 2024
Chris Frew [00:00:05]:
Welcome, I’m Chris Frew, CEO of BioBuzz Networks. We’re here today at Philly Builds Bio, the third annual symposium for life sciences, in the heart of Philadelphia, where hundreds of attendees and over 50 expert presenters have come to talk about innovation and Philadelphia’s thriving life science ecosystem. I’m honored today to be joined by Matthew Stamm. Matt, can you introduce yourself?
Matthew Stamm [00:00:28]:
Yeah, hi, my name is Matt Stamm. I’m an associate professor of electrical and computer engineering at Drexel University and an affiliate member of the Department of Computer Science. I’m an AI and machine learning researcher, and most of my research lies in an area called multimedia forensics, which involves developing AI and machine learning techniques to detect fake, manipulated, and AI-generated media.
Chris Frew [00:00:57]:
Well, that’s exciting. As a media company, I promise we don’t use any deepfake tech, and this really is Matt Stamm. That’s great, Matt. You just got off a panel, and we’re here today talking a lot about life sciences. I’d love to hear what you’re most optimistic about with AI and its impact on life sciences. Where do you see it having the greatest impact in the short term?
Matthew Stamm [00:01:22]:
Yeah, well, it’s hard to predict where the greatest impact will be because it will have so much cross-cutting impact. AI has become this broad term covering a lot of different things, but for a long time we’ve been seeing traditional machine learning techniques make substantial advancements. Machine learning techniques can ingest data and then make decisions, and that’s being used a lot in discovery and diagnosis. We’re seeing techniques like generative AI being used in drug discovery, and in the broader sense this is part of generative AI, things like large language models and multimodal large language models, so things that can take in text but also images and audio recordings. This is affecting basically all knowledge workers. These things aren’t really intelligent in the way that they can operate entirely on their own, but they can make knowledge workers extremely effective at accomplishing tasks that in the past took a lot of time. And with the advancement of newer customized techniques like retrieval-augmented generation (RAG), you can query knowledge bases and produce information, text, and media analysis at a scale and accuracy that was previously extremely challenging. And while there’s a long way to go, this will help people be much more effective in a large number of tasks.
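For readers unfamiliar with the term, retrieval-augmented generation pairs a language model with a search step over a knowledge base so that answers are grounded in retrieved documents. The sketch below is purely illustrative and is not code from Dr. Stamm’s lab: the term-frequency "embedding" is a toy stand-in for a real embedding model, the sample knowledge base is invented, and generate_answer is a hypothetical placeholder for whatever LLM API you use.

```python
# Minimal retrieval-augmented generation (RAG) sketch -- illustrative only.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank knowledge-base documents by similarity to the query and keep the top k."""
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Ground the model by pasting the retrieved passages into the prompt."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using only the context below.\nContext:\n{context}\nQuestion: {query}"

knowledge_base = [
    "Western blots are used to detect specific proteins in a sample.",
    "Phase II clinical trials assess efficacy and side effects.",
    "Deepfake videos replace one person's face with another's using generative models.",
]

prompt = build_prompt("How do deepfake videos work?", knowledge_base)
print(prompt)
# In a real system the assembled prompt would be sent to an LLM, e.g.:
# answer = generate_answer(prompt)   # hypothetical LLM call
```

In a production system the bag-of-words similarity would be replaced by dense vector search, but the overall shape, retrieve first and then generate from the retrieved context, is the same.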
Chris Frew [00:02:56]:
So those are several broad areas. But in general, it sounds like you see AI and these machine learning advancements really impacting the pace of discovery, and maybe also the cost of research and development?
Matthew Stamm [00:03:15]:
So I don’t work specifically in that space, so I should be a little bit cautious about what I say. But right now, just in the area of machine learning and artificial intelligence, we’re seeing a rate of progress that’s akin to quantum physics in the 1930s. There’s just so much exciting information being discovered and generated all the time, and new demonstrations of basic capabilities are showing up by the day. One of the big challenges will be how we take a demonstration of a general capability and cover that last mile to make really strong advances in a particular space. So in, let’s say, drug discovery, and again, that’s a space I don’t work in, we can use generative AI and NLP techniques to help discover new molecules, new protein structures, things like that, that might be used in treating different conditions. Now, taking that demonstration of a capability and turning it into something that can be operationalized, that’s a really challenging research problem. But the fact that we’re even looking at this shows how much potential there is, because a decade ago that might have seemed like science fiction.
Chris Frew [00:04:43]:
So your work is in information forensics.
Matthew Stamm [00:04:46]:
Yes.
Chris Frew [00:04:47]:
And data integrity. What is the parallel to life sciences and things like clinical trials or health IT, and to how medicine is delivered? A lot of it’s delivered over video and in new ways these days. So maybe you could talk broadly about how some of the work that you do directly impacts this field.
Matthew Stamm [00:05:09]:
So this is going to touch a lot of areas that information security has traditionally looked at. The rise of AI, and in particular generative AI, both opens up a couple of new information security problems and amplifies a bunch of existing ones that were always around but maybe took a lot more skill and resources to turn into an effective attack. One important example we’ll see in life sciences: in the past there has always been manipulation of data in clinical studies and in research grant applications. I have a colleague working on identifying fake and manipulated western blot images. In the past, you could do this by physically printing out a photo, manipulating it, and then taking a new photo. Then Photoshop comes along and you can do a lot of duplication like that. But now we have generative AI, and you can make fully synthetic, very realistic-looking western blot images or large amounts of data. This hopefully is not a widespread problem; most researchers are ethical. However, there’s always a small percentage of unethical operators who can have a tremendous impact on how the field progresses. I do know that HHS and NIH are working to integrate these tools to help suss out fake and manipulated data in grant applications and, prior to publication, to look for evidence of manipulation and generative AI being used. In general, as we look at things like remote medicine, deepfake media has progressed substantially. That word is used a lot, but its original meaning is just this: if I take a video of, let’s say, me and use generative AI to replace my face with yours, I’ve made a fake video of you where I’m puppeteering you. We’ve progressed to the point where we can make real-time deepfakes, and there are new advances where we may have this basically built in to video conferencing technology, because it’s really great from a technological standpoint. But there are important harms, where now I can puppeteer you and use this to trick people into disclosing private information or sensitive health information. I am working on this, as are several others in this space: how do we protect against it? Because this has a lot of implications, and health is one really important area. Preventing leakage of personally identifiable information, or just of sensitive information, becomes much more challenging if someone can realistically impersonate someone else. Generative AI has lots of good around it, but this is a really important challenge.
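As one concrete illustration of the kind of signal image forensics looks for, the sketch below runs error level analysis, a classic baseline that recompresses a JPEG and inspects how different regions respond; spliced or edited regions often recompress differently. This is not the detector used in Dr. Stamm’s lab, and modern forgeries and AI-generated images generally require learned detectors rather than this hand-crafted check. It assumes Pillow and NumPy are installed, and the western_blot.jpg filename is a hypothetical example input.

```python
# Error level analysis (ELA): a simple, classic image-forensics baseline.
import io

import numpy as np
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> np.ndarray:
    """Recompress the image as JPEG and return the per-pixel difference map.

    Regions that were pasted in or edited often respond differently to
    recompression, so they can stand out in the difference map.
    """
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)  # recompress at a known quality
    buffer.seek(0)
    recompressed = Image.open(buffer)
    diff = ImageChops.difference(original, recompressed)
    return np.asarray(diff, dtype=np.float32)

if __name__ == "__main__":
    ela_map = error_level_analysis("western_blot.jpg")  # hypothetical input file
    # A crude flag: unusually high error levels in part of an image warrant a closer look.
    print("mean error level:", ela_map.mean(), "max error level:", ela_map.max())
```

A real detector would go further, for example by training a model on the residual statistics rather than eyeballing a single difference map, but the sketch shows the basic idea of looking for inconsistencies a manipulation leaves behind.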
Chris Frew [00:08:20]:
It’s I guess a double-edged sword, as they say, right?
Matthew Stamm [00:08:23]:
Yes. Oh yeah, I was going to say, it’s like all new technology: it can produce tremendous good, but it can also be put to lots of malicious uses. The technology itself isn’t inherently good or bad; it’s all in the hands of the people using it. So we need to think very intentionally right now about how we put up guardrails to prevent malicious uses, because again, the technology itself isn’t bad, but malicious actors now have a new tool.
Chris Frew [00:08:54]:
To do bad things. Well, in general, like you mentioned earlier, most people working in life science are here to create good.
Matthew Stamm [00:09:02]:
Yes.
Chris Frew [00:09:03]:
So we have that to fall back on, that we’re in an industry where people seek to do good and to help. Well, Matt, I really appreciate your time today. This is a really interesting discussion, and I’m glad that you were here as part of the Philly Builds Bio event to share some of this. I know it’s a hot topic for a lot of people, and one that many people are still learning about.
Matthew Stamm [00:09:26]:
Well, thank you so much for having me. I really appreciate the opportunity to talk about this.
Chris Frew [00:09:31]:
My pleasure. My name is Chris Frew. I’m the CEO of BioBuzz Networks, and we’re here live today at Philly Builds Bio.
About the Author
BioBuzz is a community-led, experience-focused biotech and life sciences media and events company. BioBuzz highlights regional breaking news, industry professionals, jobs, events, and resources for business and career growth. Their weekly newsletter is subscribed to by thousands in the BioHealth Capital Region and Greater Philadelphia as the go-to for industry updates.