GAMP 5 Categories Explained: A Practical Guide for CSV Teams


Published on 28/11/2025


1. Understanding GAMP 5 Categories: The Framework

GAMP 5, the fifth edition of the Good Automated Manufacturing Practice guide, serves as a guideline for ensuring the quality and integrity of computerized systems used in the pharmaceutical, biotech, and medical device industries. Published by the ISPE (International Society for Pharmaceutical Engineering), it is central to computer system validation (CSV) and provides a consistent approach to validating software and computer systems that support regulated processes. The GAMP 5 software categories classify software by complexity, usage, and risk, which in turn informs the validation approach.

At the core of GAMP 5 is a risk-based approach. This is fundamental in prioritizing validation efforts where they are needed most, enabling teams to allocate resources efficiently while ensuring compliance with regulatory expectations from organizations such as the FDA, EMA, and MHRA. Understanding these software categories is essential for regulatory professionals, as improper classification could lead to regulatory non-compliance and issues in data quality and integrity.

The GAMP 5 categories span software ranging from simple infrastructure components to complex bespoke systems. The current classification comprises four categories: Category 1 (Infrastructure Software), Category 3 (Non-Configured Products), Category 4 (Configured Products), and Category 5 (Custom Applications); Category 2 (Firmware), which existed in GAMP 4, was retired in GAMP 5. Systems are classified according to their complexity and novelty, which determines the degree of validation rigor needed.

Furthermore, understanding GAMP 5 categories is not merely an academic exercise. The regulatory landscape is shifting towards more stringent scrutiny of software and data integrity, which makes comprehensive knowledge of these categories and their applications vital for effective compliance and validation efforts.

2. Gathering User Requirements Specification (URS)

The first and one of the most critical steps in the validation process is the identification of user requirements. The User Requirements Specification (URS) defines the foundational aspects of what users need from the system, acting as a preliminary document that informs the project scope and expectations. An adequately developed URS is essential not only for compliance but for ensuring that the final system meets the intended purpose effectively.

To create a robust URS, stakeholders—ranging from end-users to regulatory professionals—must collaborate closely. The URS should detail functionalities, performance criteria, and regulatory compliance needs. It must also reflect a clear understanding of how the software will operate within the regulated environment.

Best practices suggest conducting workshops or interviews with end-users to capture their needs and expectations accurately. The gathered data must then be documented in a clear and concise manner, including requirements for audits, data retrieval, and reporting functionalities. The URS also acts as a basis for future validation activities, including design qualification, IQ/OQ/PQ, and ongoing compliance.
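To make this concrete, a URS can be captured in a structured, machine-checkable form rather than free text alone. The sketch below is illustrative only (the `Requirement` fields and `check_urs` helper are invented for this example, not part of GAMP 5): it records each requirement with an identifier and GxP criticality, then flags duplicate IDs and empty descriptions before stakeholder review.

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str         # e.g. "URS-001"
    description: str
    category: str       # "functional", "performance", or "regulatory"
    gxp_critical: bool  # drives the risk-based validation effort

def check_urs(requirements):
    """Flag basic completeness problems before stakeholder review."""
    issues, seen = [], set()
    for r in requirements:
        if r.req_id in seen:
            issues.append(f"duplicate ID: {r.req_id}")
        seen.add(r.req_id)
        if not r.description.strip():
            issues.append(f"{r.req_id}: empty description")
    return issues

urs = [
    Requirement("URS-001", "System shall record an audit trail of all record changes",
                "regulatory", True),
    Requirement("URS-002", "Reports shall be exportable to PDF", "functional", False),
]
print(check_urs(urs))  # → []
```

Even a lightweight check like this catches documentation defects before they propagate into design and test documents.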

Furthermore, it is crucial that the URS is periodically reviewed and approved by all stakeholders to confirm alignment and completeness against current industry standards, such as the ICH Q8, ICH Q9, and ICH Q10 guidelines. Failure to establish a clear URS can lead to scope creep, overspending on unnecessary features, or, worse, system failures that may lead to regulatory action.

3. Conducting Design Qualification (DQ)

Once the URS is established, the next step is Design Qualification (DQ), which is a critical component in the validation lifecycle. DQ is performed to ensure that the system design adequately meets the requirements specified in the URS and adheres to industry best practices and regulatory requirements.

During DQ, teams review the design documentation provided by the vendor or development team. This includes system architecture, software specifications, functional requirements, and system configuration details. A comprehensive design review helps identify potential risks and gaps in the design, allowing for timely corrections before the system moves forward in the validation lifecycle.
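A simple way to support such a design review is a requirements traceability check. The following is a minimal sketch (the `design_review` function and the DS-/URS- identifiers are hypothetical): it maps design specification elements to the URS IDs they implement, then reports requirements with no design coverage as well as design elements that trace to no known requirement.

```python
def design_review(urs_ids, trace):
    """Compare a design traceability map against the URS.

    trace: design element -> list of URS IDs it implements.
    Returns (requirements with no design coverage,
             design elements tracing to no known requirement).
    """
    covered = {req for reqs in trace.values() for req in reqs}
    uncovered = sorted(set(urs_ids) - covered)
    untraced = sorted(e for e, reqs in trace.items()
                      if not set(reqs) & set(urs_ids))
    return uncovered, untraced

urs_ids = ["URS-001", "URS-002", "URS-003"]
trace = {"DS-01": ["URS-001"], "DS-02": ["URS-002"], "DS-03": []}
print(design_review(urs_ids, trace))  # → (['URS-003'], ['DS-03'])
```

Uncovered requirements are gaps to resolve before IQ; untraced design elements may indicate scope creep.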

The DQ process must be documented, with all findings and decisions recorded. This documentation serves as evidence of compliance during inspections by regulatory authorities like the EMA or MHRA. If issues are identified during the DQ phase, they must be addressed and resolved before proceeding to Installation Qualification (IQ).

Additionally, it’s recommended to align the Design Qualification process with a risk-based approach. This means prioritizing the review of features that have higher implications on product quality, patient safety, or regulatory compliance. The resulting documentation should include a formal sign-off from involved stakeholders, marking the transition from the design phase to the installation phase.

4. Performing Installation Qualification (IQ)

Installation Qualification (IQ) is an essential step in the validation lifecycle, confirming that the system is installed correctly according to the approved design and complies with specified requirements. This phase focuses primarily on the technical aspects of the installation, verifying that the installed system is consistent with what was designed.

Key components of IQ include confirming that hardware and software components are properly in place and functioning as intended. This can include checks on the operating environment, access controls, and installation of requisite firmware and software patches. Validation teams should employ checklists to ensure that all aspects of the installation are evaluated, which can serve as a guide during the process.
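Parts of such an IQ checklist can be automated. The sketch below assumes a hypothetical approved installation specification (the component names and version numbers are made up): it compares the versions actually found on the system against the approved list and records a PASS/FAIL result per component.

```python
# Hypothetical approved installation specification (component -> version)
APPROVED_SPEC = {"app-server": "2.4.1", "database": "14.9", "report-engine": "1.8.0"}

def iq_check(installed):
    """Compare installed component versions against the approved spec."""
    results = {}
    for component, expected in APPROVED_SPEC.items():
        found = installed.get(component)
        results[component] = "PASS" if found == expected else f"FAIL (found {found})"
    return results

# Simulated survey of the installed environment
installed = {"app-server": "2.4.1", "database": "14.9", "report-engine": "1.7.2"}
print(iq_check(installed))
```

Any FAIL entry becomes a documented deviation to resolve before proceeding to OQ.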

During the IQ, it is also important to verify system configurations and interfaces with other systems, ensuring that data integrity and security measures are in place. Documentation produced during this phase serves as integral evidence of compliance, especially when subject to regulatory audits. Furthermore, audit trails and electronic records must be assessed to ensure they are fully operational and comply with 21 CFR Part 11 requirements.

Upon successful completion of IQ, the validation team should finalize the executed IQ protocol and issue an IQ report, which should include any discrepancies or deviations noted during the installation process. These reports must be signed off by qualified personnel and may trigger corrective and preventive actions (CAPA) if required.

5. Executing Operational Qualification (OQ)

After installation, the next phase is Operational Qualification (OQ), which evaluates whether the system operates as intended throughout its specified operational ranges, confirming that all functions and features perform within defined limits.

OQ testing consists of executing predefined test scripts that assess all system functionalities against the User Requirements Specification. This includes testing for functionalities such as data input methods, process interfaces, and output reliability. Each potential failure point should be documented and assessed for risk in terms of product quality and operational efficiency.
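A lightweight OQ test runner might record each scripted check together with its URS reference and outcome. This is an illustrative sketch only (the test IDs, URS references, and checks are hypothetical): failed assertions are captured rather than halting the run, so every result lands in the log for review.

```python
def run_oq(test_cases):
    """Execute predefined OQ checks, recording PASS/FAIL with URS traceability."""
    log = []
    for case in test_cases:
        try:
            case["execute"]()
            outcome = "PASS"
        except AssertionError as exc:
            outcome = f"FAIL: {exc}"
        log.append({"test": case["id"], "urs": case["urs"], "result": outcome})
    return log

def check_pdf_export():
    # Stand-in for a real scripted check against the system under test
    assert False, "PDF export option not present"

cases = [
    {"id": "OQ-01", "urs": "URS-001", "execute": lambda: None},  # simulated passing check
    {"id": "OQ-02", "urs": "URS-002", "execute": check_pdf_export},
]
for entry in run_oq(cases):
    print(entry)
```

Each FAIL entry maps directly back to a URS requirement, which simplifies the risk assessment of the failure.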

The OQ phase involves a comprehensive examination of the software’s capability to function correctly across a range of conditions, including hardware configurations and load conditions. This is crucial, especially for systems that handle sensitive data or operate high-stakes processes in compliance with GxP standards.

Results from the OQ must be well-documented. Any failures must lead to immediate corrective actions and retesting, ensuring all functionalities meet the expected requirements before moving on to Performance Qualification (PQ). At this point, a final review of the operational environment, hardware specifications, and software configurations must also be completed, as part of demonstrating that the system is equipped and capable of performing as designed.

6. Performance Qualification (PQ): The Final Testing Phase

Performance Qualification (PQ) is the validation step that verifies that the system meets the needs outlined in the User Requirements Specification under actual operating conditions. This crucial phase ensures that the system performs consistently and reliably in a manner that meets predefined quality standards.

During PQ, representative samples or workflows must be tested to demonstrate that the system performs reliably under actual operational conditions. This may involve long-term operational runs that check for data integrity, system response times, and stability under various scenarios. Critical parameters and their corresponding acceptance criteria must be predefined in the PQ protocol.
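As an illustration of predefined acceptance criteria, the sketch below evaluates response-time samples against assumed limits (the 500 ms mean and 2000 ms maximum are invented example thresholds, not regulatory values):

```python
from statistics import mean

def pq_response_time(samples_ms, mean_limit=500, max_limit=2000):
    """Evaluate response-time samples against predefined acceptance criteria."""
    return {
        "mean_ms": mean(samples_ms),
        "max_ms": max(samples_ms),
        "pass": mean(samples_ms) <= mean_limit and max(samples_ms) <= max_limit,
    }

# Simulated response times (ms) from a long-term operational run
print(pq_response_time([320, 410, 380, 450]))  # → passes both criteria
```

The point is that the limits are fixed in the PQ protocol before testing begins; the analysis then reduces to a mechanical comparison.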

The coordination of stakeholders is vital during PQ. It is typical to have a multidisciplinary team, including QA, IT, and end-users, to oversee and validate the testing process. The data generated during PQ should be meticulously collected and analyzed for consistency and compliance.

A well-documented PQ report should outline the testing conducted, results obtained, and any discrepancies noted during the testing phase. Importantly, should the results of the PQ fail to meet any acceptance criteria, corrective actions must be set in motion to remedy the situation before any product release occurs.

Upon successful completion, the PQ serves as confirmation that the system is ready for operation in the regulated environment. This phase concludes the initial validation of a new system, although compliance must then be maintained through ongoing monitoring.

7. Continuous Performance Verification (CPV)

Continuous Performance Verification (CPV), also referred to as ongoing monitoring, is the phase in the validation lifecycle that emphasizes the ongoing evaluation of system performance throughout its operational life. This component is essential for ensuring sustained compliance with regulatory requirements and operational efficacy.

CPV involves monitoring system performance against established specifications and user requirements, including audits of system outputs and error detection. System owners and stakeholders should establish a protocol for monitoring system reliability and performance post-implementation. This can include periodic reviews of batch records, audit trails, and system logs that help ensure the system remains compliant with applicable guidelines.
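A periodic log review of this kind can be partly scripted. The following sketch is an assumed example (the log format and the 1% error-rate threshold are hypothetical, to be defined in the CPV protocol): it computes the error rate across a batch of system log lines and flags whether it stays within the agreed limit.

```python
def cpv_error_rate(log_lines, threshold=0.01):
    """Compute the fraction of ERROR lines and compare it to the agreed limit."""
    errors = sum(1 for line in log_lines if "ERROR" in line)
    rate = errors / len(log_lines)
    return {"error_rate": rate, "within_limit": rate <= threshold}

# Simulated month of system log lines: 1 error in 100 entries
logs = ["INFO batch record saved"] * 99 + ["ERROR database timeout"]
print(cpv_error_rate(logs))  # → {'error_rate': 0.01, 'within_limit': True}
```

A breach of the limit would trigger investigation and, where warranted, a CAPA record.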

As part of CPV, organizations must embrace a proactive risk-based approach, assessing the potential impacts of system changes and considering factors such as process updates or changes in regulatory requirements. Any identified risks should lead to predetermined contingency plans and ongoing training needs for users.

Documentation of CPV activities is crucial. This should include evidence of regular system monitoring, results of assessments, changes made to system configurations, and any CAPA actions taken to address newly identified risks or failures. Regular reviews should be scheduled with a multidisciplinary team to address concerns proactively, ensure compliance, and maintain data integrity in accordance with ICH Q10 principles.

8. Revalidation and Change Control

Revalidation is a critical aspect of lifecycle management that ensures the continued compliance of validated systems following changes in processes, software, or regulatory requirements. Any modifications in procedures, technology, or regulatory updates necessitate a thorough revalidation process to confirm the system’s sustained reliability and compliance.

In this phase, risk assessments must guide the decision-making process. Typically, a change control system should be established that categorizes change types, assesses impacts, and outlines necessary revalidation steps. For example, a complete overhaul of a software component may require full revalidation, whereas minor updates might only need targeted testing or documentation adjustments.

In ensuring regulatory compliance, the documentation surrounding revalidation activities should mirror that of the initial validation. All results from risk assessments and testing should be documented and stored alongside the original validation documentation. Stakeholder approval is necessary before proceeding with any significant changes post-validation.

Understanding the implications of the risk-based approach here is vital; many changes can introduce risks that affect product quality or patient safety if not adequately monitored and investigated. Tools like Failure Mode and Effects Analysis (FMEA) or generic risk assessment templates can guide teams through a structured approach.
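FMEA commonly scores each failure mode for severity, occurrence, and detection on a 1–10 scale and multiplies them into a Risk Priority Number (RPN). A minimal sketch, with example scores chosen purely for illustration:

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number = severity x occurrence x detection, each scored 1-10."""
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("FMEA scores must be between 1 and 10")
    return severity * occurrence * detection

# Example: a change touching the audit-trail module -- high severity,
# low likelihood of occurrence, moderate chance of escaping detection
print(rpn(severity=9, occurrence=2, detection=5))  # → 90
```

Teams typically set an RPN threshold above which a change requires full revalidation rather than targeted testing; the threshold itself is a quality decision documented in the change control procedure.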

Continuous training and awareness within teams dealing with CSV are essential in navigating the complexities of revalidation and change control. This ongoing education fosters a culture of awareness that ensures all stakeholders are informed of their responsibilities regarding compliance and regulatory change impacts.