
GenRocket Blog
The meaningful use of electronic health records (EHR) has been at the forefront of the healthcare IT conversation since the 2009 American Recovery and Reinvestment Act. As of 2019, the CDC reports that 89.9% of office-based physician practices are actively using some form of EHR management system.
The healthcare sector is on the verge of a revolution in data and analytics, but the advancement of data-driven decision making has been hampered by difficulties in updating legacy systems, as well as challenges stemming from disparate data sources. With the growing push to digitize patient and claims information, healthcare data exchange standards have evolved considerably and now address some of these problems, but not all of them.
Enterprise Metadata Management is technology used to centrally manage and deliver high quality data and trusted information for business analysis and decision-making. Metadata is often referred to as “data about data” and describes the content, governance, and structure of enterprise information. Metadata is often used to create data catalogs that aggregate, group, and sort multiple data sources to make them accessible for a wide variety of use cases.
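To make the data catalog idea concrete, here is a hedged sketch that models a catalog entry as a small Python data structure and groups entries by tag for discovery. The field names (dataset_name, owner, tags, and so on) are illustrative assumptions, not the schema of any particular metadata management platform.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical, simplified catalog entry: real metadata platforms track
# far richer lineage, governance, and data quality attributes.
@dataclass
class CatalogEntry:
    dataset_name: str          # logical name of the data source
    source_system: str         # where the data physically lives
    owner: str                 # accountable data steward
    description: str           # "data about data": what the content means
    tags: List[str] = field(default_factory=list)  # grouping/sorting keys

catalog = [
    CatalogEntry(
        dataset_name="claims_2023",
        source_system="claims_warehouse",
        owner="data-governance-team",
        description="Adjudicated medical claims, one row per claim line",
        tags=["claims", "phi", "finance"],
    ),
]

# Group entries by tag so analysts can discover related sources.
by_tag = {}
for entry in catalog:
    for tag in entry.tags:
        by_tag.setdefault(tag, []).append(entry.dataset_name)
print(by_tag)
```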
The strategy of shifting left means performing software testing earlier in the SDLC, so that defects are detected at a stage where they are faster and easier to correct. Software bugs that escape to production can cost 100 times more to resolve than those caught early in the product lifecycle.
Until recently, the global financial services industry (estimated at $22.5 trillion in 2021) has been slow to adopt cloud computing for its core processing functions. Entrenched legacy applications, uncertain cybersecurity risks, and regulatory compliance issues have all presented steep barriers to cloud adoption in the financial sector.
Obtaining test data for functional testing usually involves copying and subsetting the production data values used by the software under test. Production data must be carefully masked to comply with data privacy regulations and is often provisioned for testers by a dedicated test data support team. The assumption behind this approach is that production data is realistic, readily available, and made secure for testing.
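As a rough illustration of the masking step, the sketch below hashes sensitive columns in a small production extract before it is handed to testers. The column names and the hashing choice are assumptions made for the example; a real masking pipeline would also preserve data formats and referential integrity.

```python
import hashlib

# Hypothetical production rows; column names are illustrative assumptions.
production_rows = [
    {"account_id": "10001", "name": "Jane Smith", "ssn": "123-45-6789", "balance": 2500.00},
    {"account_id": "10002", "name": "John Doe",   "ssn": "987-65-4321", "balance": 310.75},
]

# Columns that must never reach a test environment unmasked.
PII_COLUMNS = {"name", "ssn"}

def mask_value(value: str) -> str:
    """Replace a sensitive value with a deterministic, irreversible token."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()[:12]

def mask_row(row: dict) -> dict:
    return {col: (mask_value(str(val)) if col in PII_COLUMNS else val)
            for col, val in row.items()}

test_rows = [mask_row(row) for row in production_rows]
print(test_rows)  # balances and IDs preserved, names and SSNs replaced with tokens
```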
A global financial services company follows an Agile development process to continuously update its core applications. The team has established a continuous delivery pipeline for releasing new features into production and is leveraging test automation tools to accelerate the cycle time of each release. It has also established a rigorous regression testing framework to catch software defects before they reach the production environment. Fixing bugs in production is time-consuming, expensive, and negatively impacts the digital customer experience.
QA professionals know the importance of full test coverage for catching hidden software defects. The impact of undetected bugs that leak into production can range from inconsequential to nothing short of catastrophic. A recent Internet outage provides a real-world example of how a single bug can disrupt the global operation of digital business on the Internet.
In traditional Test Data Management (TDM), the test data lifecycle is based on the premise that some or all data used for testing is sourced from production. A full or subsetted copy of a production database is transferred to a clean room where it is examined for personally identifiable information (PII) and masked or anonymized before being transferred to a non-production environment for testing.
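The hedged sketch below illustrates only the subsetting step of that lifecycle: select a slice of parent records and carry along just the child rows that reference them, so the copy stays referentially consistent before any masking or anonymization is applied. The table and column names are assumptions made for the example.

```python
# Hypothetical parent and child tables from a production extract.
customers = [
    {"customer_id": 1, "name": "Alice"},
    {"customer_id": 2, "name": "Bob"},
    {"customer_id": 3, "name": "Carol"},
]
orders = [
    {"order_id": 10, "customer_id": 1, "amount": 99.50},
    {"order_id": 11, "customer_id": 3, "amount": 12.00},
    {"order_id": 12, "customer_id": 2, "amount": 45.25},
]

# Take a subset of parent rows (here, every other customer).
customer_subset = customers[::2]
subset_ids = {c["customer_id"] for c in customer_subset}

# Keep only orders whose foreign key points at a customer in the subset,
# preserving referential integrity in the smaller copy.
order_subset = [o for o in orders if o["customer_id"] in subset_ids]

print(customer_subset)  # customers 1 and 3
print(order_subset)     # only their orders; PII would still need masking downstream
```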