Where Validation Meets the Future: KENX Validation University Convenes the Industry in Philadelphia

A two-day intensive at the birthplace of American governance brought together pharmaceutical validation, quality, and compliance leaders to confront an industry at an inflection point – and chart a responsible path forward with AI.

By Nathan Roman

There is a particular kind of tension in a conference room full of validation professionals. Everyone in the room has spent their career building the controls that keep drug products safe and manufacturing processes defensible. They know what the regulations say. They know what auditors ask. And right now, in 2026, they are watching the ground shift beneath frameworks that took decades to build.

That tension filled the Wyndham Historic District in Philadelphia for two days this week as KENX convened its 8th Annual Validation University – 40-plus speakers from Johnson & Johnson, Bausch + Lomb, Abbott Laboratories, Sanofi, Alcon, the FDA, and dozens of other organizations – to take stock of where the industry stands and where it has to go.

The main session room at the Wyndham Historic District, Philadelphia — full and engaged from the opening remarks.

The city was a fitting backdrop. Philadelphia has always been where systems get built from scratch – where people sit down, disagree productively, and agree on frameworks that outlast the moment. The conversations here this week had that same quality. Not theoretical. Not promotional. Grounded in the operating reality of people who sign validation protocols, answer to inspectors, and are responsible for what happens when something goes wrong.

The Honest Diagnosis

Before anyone can talk about where validation is going, the profession has to be honest about where it is.

Validation, as most organizations still practice it, is a retrospective activity. You build a system. You run it. Then you prove it worked. That model made sense when manufacturing processes were static, software changed slowly, and the biggest variable was operator technique. It does not make sense in a world of continuous manufacturing, DevOps deployment cycles, cloud-based infrastructure, and AI models that learn from every data point they process.

“Why does validation still take months in a real-time world?”

That question – asked plainly by William Gargano, Group SVP at RCM Life Sciences, during the keynote – landed because everyone in the room already knew the answer. Legacy validation models. Operational silos. Risk aversion baked into organizational culture. A profession trained to document outcomes rather than design controls into the process from the start.

The cost of that gap is real. Not just in dollars – though the validation budget on complex projects routinely runs to 15-20 percent of capital expenditure, and the inefficiencies compound across every system lifecycle. The deeper cost is strategic. Organizations that cannot move faster through validation cannot move faster through manufacturing improvement, digital transformation, or technology adoption. The validation function becomes a bottleneck for the very outcomes it is designed to protect.

No one in Philadelphia this week was arguing that the answer is less validation. The argument – consistent across every track, every panel, and every conversation in the exhibitor hall – is that the profession needs to redesign what validation is for.

What AI Is Actually Doing to This Field

Artificial intelligence is not arriving in pharmaceutical quality systems. It is already there. The question the industry is working through – imperfectly, urgently, without the luxury of waiting for every regulatory guidance to finalize – is how to govern it.

Gianna Petrongolo, Quality Assurance Specialist and Policy Lead at Johnson & Johnson, opened the conference with practical examples from her team’s actual deployments. Gap assessments that once required weeks of manual document comparison now run in hours, with AI identifying differences between global and local procedures faster and more consistently than human review. Process monitoring systems flag specification drift in near real-time, before a deviation occurs. Risk prediction models analyze years of investigation history to surface recurring systemic issues and recommend preventive actions before the next event.

These are not pilot programs or proof-of-concept exercises. They are operational. And they are creating governance questions that the profession is only beginning to answer systematically.

Spending two days in that room makes something clear that industry surveys and published guidance don’t fully capture: people are already using AI. Not just in R&D or IT – inside GMP facility systems, in quality workflows, in the environments where regulatory consequence is highest. The adoption is not waiting for perfect frameworks. It is outrunning them. The conversations in Philadelphia were not about whether to start. They were about how to catch governance up to what is already happening on the floor.

“Think of AI as the car, and you as the driver. You don’t want AI driving the regulatory decisions.”

The regulatory space around AI is moving, but it is not moving uniformly. The FDA’s credibility assessment framework provides structure without mandating specific approaches. EU Annex 22 – referenced in session after session across both days – applies to AI systems with direct impact on product quality, patient safety, and data integrity. In its current draft form, it favors static and deterministic models for GMP-critical applications, and is explicit about the constraints on dynamic, probabilistic models in high-stakes contexts. For practitioners in the room, Annex 22 was not an abstract future concern. It was a present-tense compliance question they are already working through.

Vivekram Apparsundaram of Zifo Technologies walks through the scope and interpretation of draft Annex 22.

The signal from regulators is consistent: they want to enable responsible adoption, not block it. The FDA has disclosed its own use of AI in submission review and compliance processes. The agency is actively engaging with industry on frameworks. But the gap between what the technology can do and what governance structures can confidently cover is real – and it falls on quality and validation professionals to close it.

That means human-in-the-loop review is not a design preference. It is a compliance requirement. It means model monitoring – versioning, drift detection, re-verification as data distributions evolve – must be built into lifecycle management from the start. It means the organization needs to know, at any given moment, which AI tools are in use, by whom, for what purpose, and what controls govern their outputs.
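The drift-detection piece of that monitoring loop can be made concrete with a small sketch. The code below computes a Population Stability Index (PSI) between a reference dataset and current production data; the bin count, the alert thresholds, and the synthetic data are illustrative assumptions for this sketch, not a prescribed method.

```python
import math

def psi(reference, current, bins=10):
    """Population Stability Index between a reference and a current sample.

    Values above ~0.2 are commonly treated as significant drift that
    should trigger re-verification of the model (illustrative threshold).
    """
    lo, hi = min(reference), max(reference)
    width = (hi - lo) / bins or 1.0
    def frac(sample, i):
        # Fraction of the sample falling in bin i; last bin is open-ended.
        left = lo + i * width
        right = left + width if i < bins - 1 else float("inf")
        n = sum(1 for x in sample if left <= x < right)
        return max(n / len(sample), 1e-6)  # floor avoids log(0)
    return sum(
        (frac(current, i) - frac(reference, i))
        * math.log(frac(current, i) / frac(reference, i))
        for i in range(bins)
    )

# Synthetic example: a stable process vs. a shifted batch.
reference = [20.0 + 0.1 * (i % 7) for i in range(100)]
shifted   = [20.6 + 0.1 * (i % 7) for i in range(100)]
print(psi(reference, reference) < 0.1)  # prints True: no drift
print(psi(reference, shifted) > 0.2)    # prints True: drift flagged
```

In a governed lifecycle, a flag like this would not retrain anything automatically; it would open a review so a human decides whether re-verification is needed.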

Data Integrity: The Foundation That Has to Hold

You cannot build trustworthy AI on untrustworthy data. That observation sounds simple. In practice, it surfaces one of the most persistent structural vulnerabilities in pharmaceutical quality systems.

ALCOA+ (Attributable, Legible, Contemporaneous, Original, and Accurate – plus Complete, Consistent, Enduring, and Available) has been the foundation of compliant record-keeping for decades. FDA warning letters related to data integrity are not decreasing. They are increasing. And the organizations receiving them are not, by and large, organizations that set out to cut corners. They are organizations where digitization happened without governance, where audit trails exist but are not reviewed, where systems capture data without enforcing the controls that make that data defensible.

“Just because an audit trail exists doesn’t mean it’s being reviewed. Somebody needs to actually be looking at that data and making decisions off of it.”

AI is beginning to help in practical ways. Machine learning models can risk-stratify large digitized datasets, prioritizing review effort toward the highest-risk data rather than requiring equal scrutiny of everything. Generative AI can summarize complex audit trail entries – especially those involving formula changes, code, and macros – into human-readable risk categories that make review feasible at scale. The irony is precise: AI helps make human oversight of AI more manageable.
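The risk-stratification idea can be sketched in a few lines. The patterns, weights, and review threshold below are assumptions invented for illustration – not any vendor's actual model – but they show the shape of the approach: score entries, then spend reviewer attention on the highest-risk ones first.

```python
# Illustrative risk weights for audit trail entry patterns. These keywords
# and weights are assumptions for the sketch, not a real scoring model.
HIGH_RISK_PATTERNS = {
    "formula change": 5,
    "record deleted": 5,
    "macro edited": 4,
    "value overwritten": 4,
    "permission change": 3,
    "login": 1,
}

def risk_score(entry: str) -> int:
    """Sum the weights of every high-risk pattern found in the entry."""
    text = entry.lower()
    return sum(w for pattern, w in HIGH_RISK_PATTERNS.items() if pattern in text)

def prioritize(entries, threshold=3):
    """Return entries at or above the review threshold, highest risk first."""
    scored = [(risk_score(e), e) for e in entries]
    return [e for s, e in sorted(scored, reverse=True) if s >= threshold]

trail = [
    "2026-01-14 08:02 user jdoe login",
    "2026-01-14 08:15 user jdoe formula change in cell Q7",
    "2026-01-14 09:40 user asmith record deleted, batch 1182",
]
for entry in prioritize(trail):
    print(entry)  # routine login drops out; the two high-risk edits surface
```

A production system would use a trained model rather than keyword weights, but the governance point is the same: the tool ranks, a human reviews and decides.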

The panel session on FDA and industry dialogue reinforced a point that gets lost in the complexity of regulatory guidance: guidance documents are not binding. The FDA’s Computer Software Assurance guidance modifies how validation should be approached, but it does not change underlying statutes. Organizations that understand that distinction – and can articulate their risk rationale clearly to an inspector – have more operational flexibility than they often use. The QMSR update, aligning 21 CFR Part 820 with ISO 13485, shifts audit focus toward process and risk-based evaluation. Management accountability increases. The era of passing an audit by producing the right documents is ending.

The FDA & Industry Dialogue panel drew one of the largest audiences of the two-day event.

Panelists Jorge Cordero (Bausch + Lomb), Khaled Moussally (Compliance Group), Daniel Walter (FDA/Genari AI), and Rosalind Beasley (Genari AI), moderated by Dori Gonzalez-Acevedo (ProcellaRx).

Change Control Is Where Good Intentions Break Down

If data integrity is the foundation, change control is where the structure either holds or cracks. It is also, as Sneha Saggurthi of BioBuzz argued in a session that pulled no punches, one of the most consistently mismanaged processes in pharmaceutical quality systems.

The core question in any change control is simple: is this change safe for the product and the patient? The process that surrounds that question is frequently anything but simple. Unstructured impact assessments. Reviewer routing that defaults to every department rather than the right departments. Changes implemented before QA approval. Regulatory considerations identified too late to act on.

Sneha Saggurthi, Industry Expert at BioBuzz, presenting Smart Change Control: Automation for Quality Decisions.

The consequences are not theoretical. Saggurthi walked through real cases: a 2024 FDA warning letter issued because manufacturing changes lacked documented change control; a 2023 Form 483 at another company for insufficient scientific justification and a change approved after lot release rather than before. A biologics company cited for implementing an assay revision before the required FDA approval window had passed. These are not edge cases. They are the predictable result of processes that were designed for a slower world and haven’t been updated.

Automation offers a practical path forward – not to replace scientific judgment, but to enforce structure around it. Workflow logic can route a raw material supplier change to the right subject matter experts automatically, flag release testing implications for regulatory review, and surface training requirements as part of every change initiation. Dashboards can prioritize the highest-risk changes so QA attention goes where it matters most.

“Automation should guide the process. It should never own the decision.”

The boundary Saggurthi drew is the same one running through every AI-related conversation at this event: automation makes the process more consistent and less dependent on individual memory or expertise. It does not make human accountability optional. If anything, automating the structure around change control makes the human judgment at the center of it more visible, not less.
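The rule-driven routing described above can be sketched simply. The change types, reviewer lists, and flags below are illustrative assumptions, not a specific QMS configuration – the point is the structure: known change types route to the right experts, unknown ones escalate instead of defaulting to everyone.

```python
# Illustrative routing rules: change type -> reviewers and review flags.
# All names here are hypothetical, invented for this sketch.
ROUTING_RULES = {
    "raw_material_supplier": {
        "reviewers": ["Supply Chain SME", "QC Lab", "Regulatory Affairs"],
        "flags": ["release testing impact"],
    },
    "equipment_software": {
        "reviewers": ["Engineering", "CSV/IT Quality"],
        "flags": ["revalidation assessment"],
    },
    "batch_record": {
        "reviewers": ["Manufacturing", "QA"],
        "flags": ["training requirement"],
    },
}

def route_change(change_type: str) -> dict:
    """Return the review plan for a change; QA always approves last."""
    rule = ROUTING_RULES.get(change_type)
    if rule is None:
        # Unknown types escalate for classification rather than
        # defaulting to every department.
        return {"reviewers": ["QA Triage"], "flags": ["manual classification"]}
    return {"reviewers": rule["reviewers"] + ["QA Final Approval"],
            "flags": list(rule["flags"])}

plan = route_change("raw_material_supplier")
print(plan["reviewers"])  # supplier SMEs plus QA final approval
print(plan["flags"])      # release testing flagged for regulatory review
```

The automation here only enforces who must look and what must be considered; the approval decision itself stays with the humans on the routing list.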

Infrastructure Has Changed. Validation Has Not Kept Up.

One of the most practically significant conversations at the event addressed something that rarely gets its own headline: the validation of modern infrastructure itself.

Pharmaceutical organizations are not running single on-premise applications anymore. They are running ecosystems – cloud platforms, edge devices collecting real-time sensor data, AI layers sitting on top of both, third-party SaaS tools embedded in quality and manufacturing workflows. Each component can be validated in isolation. The question being asked now – by auditors, by quality leaders, by the organizations responsible for these systems – is whether the integrated ecosystem is defensible as a whole.

Breiana Villella of Lives International presenting Validating Modern Infrastructure: AI, Cloud, and Edge Computing to a near-capacity breakout session.

The case study presented by Lives International and Cerberon Cybersecurity illustrated what it looks like when you build it right – migrating a validated thermal validation platform to Azure cloud, integrating edge devices – the sensors and probes collecting the data – as the real-time data source, and layering two AI levels for user assistance and template generation. Every AI-proposed output requires human approval before execution. Every approval is captured in the audit trail. Role-based access controls restrict who can act on AI-generated outputs. The system was independently tested for GxP-specific security posture – not just general cloud standards.

“Cloud makes infrastructure scalable, edge makes it real, AI makes it faster — but validation makes it legitimate.”

The lesson generalizes. Organizations migrating to cloud, deploying edge devices, or layering AI on existing infrastructure cannot approach validation as a post-migration activity. The controls have to be designed in. Validation has to be continuous. The audit trail has to span the ecosystem, not just its parts.
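The human-approval and audit-trail pattern from the case study can be illustrated with a toy gate. The role names and record fields are assumptions for this sketch – not the actual system presented – but they show the three controls working together: role-based access, mandatory human decision, append-only logging.

```python
import datetime

# Roles authorized to act on AI-generated outputs (illustrative names).
APPROVER_ROLES = {"qa_reviewer", "validation_lead"}

class ApprovalGate:
    """Every AI-proposed output needs a human decision from an authorized
    role before execution, and every decision lands in the audit trail."""

    def __init__(self):
        self.audit_trail = []  # append-only log of decisions

    def decide(self, proposal: str, user: str, role: str, approve: bool) -> bool:
        if role not in APPROVER_ROLES:
            # Role-based access control: unauthorized roles cannot decide.
            raise PermissionError(f"role '{role}' cannot act on AI outputs")
        self.audit_trail.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "proposal": proposal,
            "user": user,
            "role": role,
            "decision": "approved" if approve else "rejected",
        })
        return approve

gate = ApprovalGate()
ok = gate.decide("AI-generated mapping template v3", "jdoe", "qa_reviewer", True)
print(ok, len(gate.audit_trail))  # prints: True 1
```

Nothing in the gate executes the proposal; it only records a defensible, attributable decision – which is exactly the property an inspector will ask about.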

Chip Bennett of Project Farma grounded the AI-in-facilities conversation in something more immediate for many attendees: predictive maintenance, energy optimization, and space utilization analytics. These applications sit largely outside GMP-critical scope under current regulatory frameworks – but they are where many organizations will build their first real AI governance muscles. The skills transfer directly into higher-stakes applications.

Chip Bennett, Senior Director at Project Farma, presenting AI in Facility Monitoring and Predictive Maintenance.

The Kaye session made a similar point through a different lens. Real-time monitoring that moves from reactive alarms to predictive early warning is not a futuristic concept. The technology exists. What most organizations are still building is the validated, governed infrastructure to act on what the data is telling them before failure occurs – not after.

Milton Alderman of Kaye Instruments presenting Predictive Monitoring in Pharma 5.0 — responding before failure, not after.

The Exhibitor Floor Told Its Own Story

The breaks between sessions were not really breaks. The exhibitor floor was active throughout both days – vendors, consultants, and technology providers in conversation with practitioners about specific problems, not general capabilities.

The exhibitor showroom — Compliance Group and Kaye Instruments among the sponsors supporting the event.

Kaye Instruments, Lives International, Compliance Group, PQE Group, Mentor Technical Group, and others brought tools and services to the conversation that reflected exactly the terrain being discussed on stage: thermal validation, environmental monitoring, AI-enabled compliance platforms, digital CQV systems, and lifecycle management solutions. What stood out was not the technology itself but the specificity of the questions practitioners were asking. This was not a crowd looking for an introduction to AI. It was a crowd working through implementation.

The Profession Is Changing. So Is the Role.

There is a workforce dimension to all of this that the industry is only beginning to reckon with openly.

Validation has historically been a documentation function. Write the protocol. Execute the test. Record the result. File the report. That work is not disappearing – but its center of gravity is shifting. The organizations navigating AI and digital transformation most effectively are those where validation professionals are engaged earlier in design, where QA has a governance role rather than a checkpoint role, where cross-functional teams spanning quality, IT, engineering, and regulatory affairs make decisions together.

“QA’s job is governance now. If you have the right alarms and controls, quality control almost disappears. What remains is oversight.”

That shift requires different skills. Statistical fluency to interpret trend data. Technical literacy to evaluate AI model behavior. Governance instincts to design oversight frameworks that are robust without being paralyzing. Communication skills to translate validation requirements into business terms that leadership can act on. The breadth of the KENX program – from statistics and root cause analysis to risk-based sampling, digital twins, paperless CQV, and AI use cases – reflected the expanding scope of what this profession is now expected to do.

Philadelphia, Again

The Wyndham Historic District sits a few blocks from some of the oldest institutional architecture in American civic life. Independence Hall. Carpenters’ Hall. The Arch Street Meeting House, where Quakers gathered to debate abolition, governance, and moral responsibility at a moment when those questions had no settled answers.

The parallel is imperfect, as all parallels are. But there is something appropriate about the life sciences validation community gathering in this city to work through questions that also have no settled answers yet. How do you govern a technology that learns? How do you validate a system that changes? How do you maintain human accountability in an environment designed to reduce human error?

These are not technical questions with technical answers. They are governance questions that require the same combination of principle and pragmatism that built the frameworks the profession has relied on for decades.

KENX Validation University did not resolve them. No two-day event could. But it put the right people in a room, pushed the conversation past the surface, and sent practitioners back to their organizations with a clearer picture of where the industry is and what it’s being asked to do next.

The event continues its 2026 tour — San Diego, Dublin, Philadelphia again, Research Triangle Park, Toronto, Singapore, San Juan, and Amsterdam. The questions it is wrestling with will travel with it.

KENX 2026 CONFERENCE SERIES

March 19-20   |   Philadelphia, PA   |   Validation University + Pharmaceutical Engineering Summit

May 12-13   |   San Diego, CA   |   Harness the Power of AI in GxP – West

June 2-3   |   Dublin, Ireland   |   CSV & CSA University Europe

June 11-12   |   Philadelphia, PA   |   Laboratory University + GxP Supply Chain Summit

August 12-13   |   San Diego, CA   |   Sterility Assurance University

September 15-16   |   RTP, NC   |   Harness the Power of AI in GxP – East

October 7-8   |   Philadelphia, PA   |   MedTech Innovation & Validation + GMP University

November – December   |   Singapore | San Juan, PR | Amsterdam | San Diego

Nathan Roman is the founder of Validation Management Solutions (VMS), author of Six Steps to Effective Temperature Mapping, and creator of the Temperature Matters™ newsletter. He is a recognized specialist in thermal validation, environmental monitoring, and CQV services within the life sciences industry.

Nathan is also an active LinkedIn content creator and thought leader, known for sharing practical, real-world guidance that helps teams strengthen compliance, improve audit readiness, and execute validation with clarity.

He attended KENX Validation University in Philadelphia as both a participant and contributor, engaging with industry peers on advancing modern validation practices.