US Health Data News



 
Datacom 160 million - Health dept extends Datacom outsourcing deal for $160m
 
Data brokers / diginomica - Data brokers and the implications of data sharing
 
DXC / CMS - DXC books $81M CMS data warehouse support order
 
DE 8-State - Delaware joins eight-state health care data sharing initiative
 
GA DPH Grants - Georgia Department Of Public Health Awarded Grants
 
HHS 2019-11 fine - Texas health department to pay $1.6M for HIPAA violations
 
HHS Protect from CPI - New Secretive Data System Shaping Federal Pandemic Response
 
IBM 100 IT - IBM Announces $100 Million Health IT Program
 
Governance Key - Governance key to creating effective health data warehouse
 
Kaiser Apps - Analysis: App-Happy Health Care Full of Optimism, Money
 
UITALD - Interactive Tools to Assess the Likelihood of Death
 
MA DW - Massachusetts Senate passes $1.7B bond bill
 
Michigan DW - Michigan saves $1 million per business day with data warehouse
 
Digital Health Care - The Digital Health Care Environment
 
Medicare Use - Feds to allow use of Medicare data to rate doctors hospitals and other health care providers
 
Most Expensive - 10 Most Expensive Hospitals in the U.S.
 
NAPHSIS FOD - NAPHSIS Releases New Fact of Death Query Service
 
Premier-IBM - IBM and the Premier healthcare alliance to integrate nation's healthcare data
 
SAS Disaster - Behind Georgia’s Covid-19 dashboard disaster
 
SC PHG - S.C. public health group gets $11.25 million grant
 
TX HHSC #1 - Problem-plagued Texas data project delayed again
 
TX HHSC #2 - Massive Health Data Warehouse Delayed Again
 
TX HHSC #3 - Texas HHSC privacy breach may affect 1.8k individuals
 
UPMC - $100 Million Investment in Sophisticated Data Warehouse and Analytics
 
Veritas / TR - Veritas to Buy Thomson Reuters Health Care Data Management Line
 
VT Data Warehouse - Audit Questions Health Information Exchange Oversight in VT
 


Massachusetts Senate passes $1.7B bond bill

Posted Jul 14, 2020 at 5:00 PM

Link to original article

Sen. Cindy Friedman, D-Arlington, recently joined her colleagues in passing a $1.7 billion General Government Bond Bill focused on capital investments to improve government infrastructure, empower communities disproportionately impacted by the criminal justice system, support early education and care providers with safe reopening during the COVID-19 pandemic, and expand equitable access to remote learning opportunities for vulnerable populations across the commonwealth.

Building on the Senate’s efforts to address issues of racial equity and support communities of color, the bond bill authorizes $50 million in new economic empowerment and community reinvestment capital grants to support communities disproportionately impacted by the criminal justice system with access to economic and workforce development opportunities.

Friedman successfully secured a $2.5 million technology investment authorization to automate the Criminal Offender Record Information, or CORI, system for sealing criminal records. Under the current system, sealing a criminal record can take months — meanwhile employers, landlords, bankers and others turn people away from employment, housing and financing opportunities based on minor or old incidents that appear on CORIs.

“Our antiquated CORI system is just one example of how our system continues to disproportionally impact people of color,” said Friedman. “Now more than ever, we should be investing in the things that strengthen our communities, support our most vulnerable residents and help people restart their lives rather than penalize them for life. I’m pleased that these funds were authorized in this bill, and am grateful for my Senate colleagues for moving this important piece of legislation forward.”

In addition to empowering economically disadvantaged communities, the Senate’s bond bill authorizes capital investments to ensure accountability in public safety and modernize criminal justice data collection by providing $20 million for a body camera grant program for police departments and $10 million for a statewide criminal justice data system modernization to help better track racial and ethnic disparities across the judicial and public safety systems.

To ensure equitable access to remote learning opportunities and safe access to early child care opportunities, the Senate bond bill authorizes $50 million to enhance and expand access to K-12 remote learning technology and provides $25 million to assist licensed early education and care providers and after school programs with capital improvements to ensure safe reopening during the COVID-19 public health emergency.

The bill also addresses growing food insecurity and food supply chain needs across the commonwealth due to the COVID-19 pandemic, by authorizing $37 million for a food security grant program to address infrastructure needs for farms, retailers, fisheries, food system businesses and food distribution channels.

Additional components of the bond bill include:

• $140 million for cybersecurity upgrades to improve the commonwealth’s technology and telecommunications infrastructure.

• $115 million for municipal library improvements.

• $100 million for governmental performance and informational technology infrastructure upgrades.

• $30 million for public higher education public safety grants.

• $25 million for fire safety equipment grants.

• $20 million for municipal broadband access grants.

• $5 million for the development of a common application for MassHealth enrollees to more easily access the federal Supplemental Nutrition Assistance Program.

• $2.9 million for a public health data warehouse to track population health trends, such as COVID-19.

• $2.5 million for implementation of an automated electronic sealing process to seal certain criminal records.

The bill returns to the Massachusetts House of Representatives, where a similar bill has passed. The Senate expects differences between the two versions to be resolved quickly.
 




New, secretive data system shaping federal pandemic response
HHS Protect, at the center of health agency clashes, was created after the CDC's long struggle to modernize.
By Liz Essley Whyte | September 22, 2020

Link to original article

As deadly Ebola raged in Africa and threatened the United States, the Centers for Disease Control and Prevention pinpointed a problem: The agency had many sources of data on the disease but no easy way to combine them, analyze them on a single platform and share the information with partners. It was using several spreadsheets and applications for this work — a process that was “manual, labor-intensive, time-consuming,” according to the agency’s request for proposals to solve the problem. It spent millions building a new platform.


But at the beginning of the coronavirus pandemic, the CDC still struggled to integrate and share data. The system it had built during the Ebola crisis wasn’t up to the task. An effort to modernize all of the agency’s data collection and analysis was ongoing: One CDC official told a congressional committee in March that if the agency had modern data infrastructure, it would have detected the coronavirus “much, much sooner” and would have contained it “further and more effectively.”

By April, with coronavirus cases spiking in the U.S. and officials scrambling to wrangle information about the pandemic, the CDC had a proof-of-concept for a new system to pull together all of its various data streams. But it was having trouble figuring out how to securely add users outside the agency, as well as get the funding and political backing needed to expand it, according to two sources with close knowledge of the situation.

So the CDC turned to outsiders for help. Information technology experts at the federal Department of Health and Human Services took control of the project. Five days later, they had a working platform, dubbed HHS Protect, with the ability to combine, search and map scores of datasets on deaths, symptoms, tests, ventilators, masks, local ordinances and more.

The new, multimillion-dollar data warehouse has continued to grow since then; it holds more than 200 datasets containing billions of pieces of information from both public and private sources. And now, aided by artificial intelligence, it is shaping the way the federal government addresses the pandemic, even as it remains a source of contention between quarreling health agencies and a target for transparency advocates who say it’s too secretive.

The Center for Public Integrity is the first to reveal details about how the platform came to be and how it is now being used. Among other things, it helps the White House and federal agencies distribute scarce treatment drugs and supplies, line up patients for vaccine clinical trials, and dole out advice to state and local leaders. Federal officials are starting to use a $20 million artificial intelligence system to mine the mountain of data the platform contains.

People familiar with HHS Protect say it could be the largest advance in public health surveillance in the United States in decades. But until now it has been mostly known as a key example of President Trump’s willingness to sideline CDC scientists: In July, his administration suddenly required hospitals to send information on bed occupancy to the new system instead of the CDC.

The Trump administration has added to the anxiety surrounding HHS Protect by keeping it wrapped in secrecy, refusing to publicly share many of the insights it generates.

“I want to be optimistic that everything happening here is actually a net improvement,” said Nick Hart, CEO of the Data Coalition, a nonprofit that advocates for open government data. “The onus is really on HHS to explain what’s happening and be as transparent as possible… It’s difficult to assess whether it really is headed in the right direction.”

A LONG HISTORY OF DATA FRUSTRATION

To hear some tell it, the reason behind the CDC’s long struggle to upgrade its data systems can be found in its name: the Centers — plural — for Disease Control and Prevention. Twelve centers, to be exact, and a jumble of other offices, each with its own expertise and limited funding: the National Center for Immunization and Respiratory Diseases, for example, or the Center for Preparedness and Response. Scientists at each myopically focus on their own needs and strain to work together on expensive projects to benefit all, such as upgrading shared data systems, experts familiar with the CDC said. A 2019 report from the Council of State and Territorial Epidemiologists found that the agency had more than 100 stand-alone, disease-specific tracking systems, few of them able to talk to each other, let alone add in outside data that could help responders stanch outbreaks.

“CDC has been doing things a certain way for decades,” said a person familiar with the creation of HHS Protect who was not authorized to speak on the record. “Sometimes epidemiologists are not technologists.”

The U.S. government knew for more than a decade it needed a comprehensive system to collect, analyze and share data in real time if a pandemic reached America’s shores. The 2006 Pandemic and All-Hazards Preparedness Act directed federal health officials to build such a system; in 2010 the Government Accountability Office found that they hadn’t. A 2013 version of the law required the same thing; in 2017 the GAO found again that it hadn’t happened. Congress passed another law in 2019 calling for the system yet again. In 2020 the coronavirus struck.

“We’ve had no shortage of events that have demonstrated the importance of bringing together both healthcare and public health information in a usable, deeply accessible platform,” said Dr. Dan Hanfling, a vice president at In-Q-Tel, a nonprofit with ties to the CIA that invests in technology helpful to the government. “We’ve missed the mark.”

In fighting a pandemic, the nation struggles with data at every turn: from collecting information about what’s happening on the ground, to analyzing it, to sharing it, to sending information back to the front lines. The CDC still relies on underfunded state health departments using antiquated equipment — even fax machines — to gather some types of information. The agency for years has also had ongoing, formal efforts to upgrade its data processes.

“There’ve been a lot of false starts in this area,” said Dr. Tom Frieden, the head of the CDC during the Obama administration. Frieden blamed money already spent on existing systems and local governments unwilling to make changes, among other reasons. “We had decades of underinvestment in public health at the national, state and local levels, and that includes information systems.”

The CDC attempted to fix at least some of those problems — joining and analyzing and sharing data from disparate sources — with the system it built during Ebola, known as DCIPHER. The system saved the agency thousands of hours of staff time as it responded to a salmonella outbreak and lung injuries from vaping. But it couldn’t keep up with the coronavirus. It was stored on CDC servers instead of the cloud and couldn’t handle the flood of extra data and users needed to fight COVID-19, according to two sources with knowledge of the situation.

So CDC officials handed the proof-of-concept for a new system to the chief information officer of HHS, Jose Arrieta. The CDC was having trouble figuring out how to approve new users from outside the agency, such as the White House Coronavirus Task Force, verify their identities and give them appropriate permissions to view data, according to two sources with close knowledge of the situation. Arrieta and his team solved the technical problems, stitching together eight pieces of commercial software to build the platform and pulling in data from both private and public sources, including the CDC.

“Our goal was to create the best view of what’s occurring in the United States as it relates to COVID-19,” said Arrieta, a career civil servant who has worked for both Republicans and Democrats, speaking for the first time since his sudden departure from HHS in August. He said, and a friend confirmed, that he left his job primarily to spend more time with his young children after months of round-the-clock work. “It changes public health forever.”

HHS Protect now helps federal agencies distribute testing supplies and the scarce COVID-19 treatment drug remdesivir, identify coronavirus patients for vaccine clinical trials, write secret White House Coronavirus Task Force reports sent to governors, determine how often nursing homes must test their staffs for infection, inform the outbreak warnings White House adviser Dr. Deborah Birx has been issuing to cities in private phone calls — and more.

The system allows users to analyze, visualize and map information so they can, for example, see how weakening local health ordinances could affect restaurant spending and coronavirus deaths in mid-size cities across America. Arrieta’s team assembled the platform from eight pieces of commercial software, including one purchased via sole-source contracts worth $24.9 million from Palantir Technologies, a controversial company known for its work with U.S. intelligence agencies and founded by Trump donor Peter Thiel. CDC used the Palantir software for both the HHS Protect prototype and DCIPHER, and it works well, Arrieta said; contracting documents cited the coronavirus emergency when justifying the quick purchase.

And now a new artificial intelligence component of the platform, called HHS Vision, will help predict how particular interventions, such as distributing extra masks in nursing homes, could stanch local outbreaks. Arrieta said HHS Vision, which is not run with Palantir software, uses pre-written algorithms to simulate behaviors and forecast possible outcomes using what experts call “supervised machine learning.”
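HHS Vision's internals have not been made public, so the following is only a generic illustration of what "supervised machine learning" means in this kind of forecasting: a model is fit to labeled past outcomes, then asked to score a proposed intervention. Every feature, number and name in this Python sketch is invented, not drawn from HHS Vision.

# Generic supervised-learning sketch with invented data; NOT HHS Vision's code.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data: one row per county-week.
# Features: [masks distributed per resident, test positivity %, beds occupied %]
X_train = np.array([
    [0.1, 12.0, 75.0],
    [0.5, 9.0, 70.0],
    [0.9, 6.5, 62.0],
    [0.2, 11.0, 80.0],
])
# Labels (the "supervision"): new cases per 100k residents the following week.
y_train = np.array([140.0, 95.0, 60.0, 150.0])

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Simulate an intervention: distribute extra masks in a county that otherwise
# resembles the last training row, then forecast next week's case rate.
print(model.predict(np.array([[0.8, 11.0, 80.0]])))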

Though many of the datasets in HHS Protect are public, a scientist who wanted to use them would have to hunt for them from many agencies, clean them and help them relate to one another. That work is already done in HHS Protect.

“It is a big leap forward,” said Dr. Wilbert van Panhuis, an epidemiologist at the University of Pittsburgh who is working to get access to the platform for a group of 600 researchers. “They are making major progress in this pandemic.”

But the new system became a source of controversy this summer when officials told hospitals to stop reporting information on beds and patients to a well-known and revered CDC system, the National Healthcare Safety Network, and instead send it to Teletracking, a private contractor connected to HHS Protect. Observers feared the move undermined science and was another example of political interference with the CDC’s work. In August, hospital bed data from Teletracking sometimes diverged wildly from what states were reporting, though now it aligns more closely, said Jessica Malaty Rivera, science communication lead for the Covid Tracking Project, a volunteer organization compiling pandemic data.

“If there’s one major lesson we have from emergencies in the last 20 years… it’s not to try to create a new system but take the most robust system you have and scale it,” Frieden said. “The way to make Americans safer is to build on, not bypass, our public health system.”

Some familiar with the switch from the CDC to Teletracking said it allowed the federal government to compile more data on more hospitals. It happened, they said, because the White House task force members asked for more hospital information to prepare for the winter. Teletracking was able to start collecting extra data from hospitals in a matter of days, while the CDC said it would take weeks to make those changes.

“Our goal was to create the best view of what’s occurring in the United States as it relates to COVID-19.”

JOSE ARRIETA, FORMER CHIEF INFORMATION OFFICER OF HHS

A CDC official familiar with the situation disputed those claims, saying that the National Healthcare Safety Network provided excellent data without overburdening already-stressed hospitals. Making the switch to HHS Protect, he said, is “like taking a veteran team off the field to replace that team with rookies. You get a lot of rookie mistakes.”

The hospital data dust-up aside, some CDC officials remain skeptical of HHS Protect.

“It is a platform. It isn’t a panacea,” said a CDC official familiar with the system who didn’t want his name published because he wasn’t authorized to speak to the media. Some of the outside data sources HHS Protect depends on — including the hospital data from Teletracking — aren’t reliable, the official said, sometimes showing, for example, that a hospital had a negative number of patients in beds. “We’re seeing enough of it to warrant overall big-time concerns about the hospital data quality.”

Some are also concerned about the system’s ability to guard patient privacy: More than a dozen lawmakers sent a letter to HHS Secretary Alex Azar in July questioning how HHS Protect would protect individuals’ privacy.

But officials say HHS Protect contains no personal information on patients or others. It tracks users’ every interaction with the data and blocks them from datasets they don’t have authority to see, allowing the federal government to guard privacy and prevent data manipulation, sources familiar with the system said.

UNDER WRAPS

The Trump administration adopted data principles in 2018 that include promoting “transparency… to engender public trust.” But much of the data in HHS Protect remains off limits to the public, glimpsed only in leaked reports and occasional mentions by White House task force members. The platform’s public web portal displays the hospital bed data that caused so much controversy this summer but little else. Observers of all stripes, from Frieden to the conservative Heritage Foundation, have called for the Trump administration to make more of its data public.

Van Panhuis said HHS Protect clearly was designed with federal government users in mind, not academic researchers or the public.

“It’s a bit disappointing,” he said. “Currently we have to invent that part of the system.”

Basic data about the pandemic contained in HHS Protect remains secret and is sometimes obscured even from local public health officials. The White House task force’s secret recommendations to governors use HHS Protect data on cities’ test positivity rates, but the White House does not release those reports. And that national dataset is still nowhere to be found on any federal website. When asked, an HHS spokesperson could not point to it.

Some secrecy surrounding HHS Protect data exists for good reason, officials said: Some private companies share their data with HHS on the condition that it will be used to respond to the public health crisis and not be revealed to competitors. And releasing some of the data, even though they contain no personal information, could trigger privacy concerns, forcing officials to redact some of it. For example, it might become obvious whose symptoms were being described in data from a small, rural county with one hospital and one coronavirus patient.

But the secrecy around HHS Protect frustrates transparency advocates who want government data to be shared more openly.

Ryan Panchadsaram, who helps run the coronavirus data website Covid Exit Strategy, would like HHS Protect to publish in one location information on cases, test results and other metrics, for every city and county in the U.S., in an easily accessible and downloadable format.

“Making it available to the public shouldn’t be that difficult,” he said. “It’s a political and policy decision.”

People looking for county-level information — to make decisions about whether to visit grandparents, for example — are often out of luck. And if they want a one-stop shop for state-level data, they must turn to private sources: Panchadsaram said that even employees of state and federal agencies visit Covid Exit Strategy for information on the coronavirus. The state of Massachusetts uses his site’s data to decide which travelers must quarantine when they arrive.

“It is shocking that they come to us when the data is sitting in its purest form” in HHS Protect, he said.

Federal officials, attempting to deliver on at least some transparency promises, say they are working to set up congressional staffers with logins to HHS Protect. Staffers monitoring the pandemic say they have yet to be granted access, though some states are using the system.

The secrecy surrounding HHS Protect also means that outsiders can’t evaluate whether the platform is living up to its promise. Despite repeated requests from Public Integrity, HHS and CDC spokespeople did not make any officials available for on-the-record interviews regarding HHS Protect.

“The federal government has an obligation to make as much data and information public as possible,” said Hart, of the Data Coalition. “HHS should consider ways to improve the information it’s providing to the American people.”

Zachary Fryer-Biggs contributed to this report.
 


Becker's Hospital Review - November 9, 2019

Link to original article

The Office for Civil Rights at HHS slapped the Texas Health and Human Services Commission with a $1.6 million fine for HIPAA violations, according to a Nov. 7 news release.

Specifically, the OCR was penalizing the Department of Aging and Disability Services for its data breach in 2015. The department reorganized into the Texas Health and Human Services Commission in September 2017.

In a report to the OCR, the department indicated that the electronic protected health information of 6,617 individuals was accessible online. Patient data that was exposed included names, addresses, Social Security numbers and treatment information.

The department said that during the move of an internal application from a private server to a public server, a flaw in the software code allowed unauthorized users to access individuals' information. The OCR investigation found that the department failed to conduct an enterprise-wide risk analysis and implement access and audit controls for its information systems and applications.

"Covered entities need to know who can access protected health information in their custody at all times," said the OCR Director Roger Severino. "No one should have to worry about their private health information being discoverable through a Google search."
 


Silver Spring, MD (PRWEB) March 01, 2017

Link to original article

The National Association for Public Health Statistics and Information Systems (NAPHSIS) announced today the release of a new Fact of Death (FOD) Query Service http://www.naphsis.org/evvefod, providing credentialed organizations the ability to quickly, reliably, and securely discover if a death record exists. This service is part of the NAPHSIS Electronic Verification of Vital Events (EVVE) System and is the only service in existence with the ability to match authorized queries against the databases of state or local vital record jurisdictions, where all death records in the nation are stored.

"We've been swamped by requests for death data from a variety of industries. Access to complete, timely, and accurate death record data does not currently exist in the United States," says Anthony Stout, manager of EVVE products and services. "EVVE Fact of Death resolves this problem and can save the country and our customers millions, if not billions, of dollars a year."

Before today, the Social Security Administration (SSA) Death Master File (DMF) https://www.ssa.gov/dataexchange/request_dmf.html was the primary source for death record data. However, its usefulness has been severely hampered since November of 2011, when the SSA was no longer allowed to include state protected death records in the DMF. As a result, millions of death records are missing every year from the DMF, making it woefully incomplete and unusable for many organizations requiring this information to help prevent fraud, protect identities, reduce waste, and streamline business processes.

Currently, 37 of the 57 states and jurisdictions are participating in the EVVE FOD service, allowing credentialed users to match against more than 55 million death records. The number of participating jurisdictions is increasing steadily, and all 57 states and jurisdictions across the nation are working to join the EVVE FOD service as soon as possible.

As death record data includes highly sensitive and personal information, confidentiality and security of such data is of utmost importance. To ensure this service utilizes the highest levels of security possible, NAPHSIS has partnered with LexisNexis® VitalChek Network Inc. (VitalChek) http://vitalcheknetwork.com/ to maintain the EVVE Fact of Death Query Service. VitalChek adheres to all major InfoSec standards such as PCI-DSS, SOC 1, SOC 2, and uses public key / private key encryption technology to ensure incoming requests and outgoing results are secure.

An organization that has a valid need for death record data, and belongs to one of the following current categories, may be credentialed to use EVVE Fact of Death:

• Federal - Benefits or Admin
• State/Local - Benefits or Admin
• Pension/Retirement
• Insurance
• Receivables
• Financial

Organizations can become credentialed EVVE Fact of Death users by visiting the website at http://www.naphsis.org/evvefod, clicking on the "Get Started Now" link at the bottom of the page and following the prompts. The process is easy, and qualified customers can expect to be using EVVE Fact of Death within a week. There is a minimal per-record price for credentialed private companies and/or government agencies to use the EVVE Fact of Death service.

About NAPHSIS: The National Association for Public Health Statistics and Information Systems (NAPHSIS) is the national nonprofit organization representing the state vital records and public health statistics offices in the United States. Formed in 1933, NAPHSIS brings together more than 250 public health professionals from each state, the five territories, New York City, and the District of Columbia.

Contact: Anthony N. Stout, Manager, EVVE Products and Services, 301-563-6005 / evvefod(at)naphsis(dot)org
 


Audit Questions Health Information Exchange Oversight in VT
Health IT Interoperability
By Kyle Murphy, PhD
October 06, 2016

Link to original article

An audit of health information exchange activities in Vermont has yielded more questions than answers about healthcare interoperability in the state.

Officials at the Department of Vermont Health Access (DVHA) tasked with overseeing the development of a statewide health information exchange have drawn criticism from the state's auditor for their oversight of millions of dollars in grants and contracts. In a report released late last month, State Auditor Douglas R. Hoffer found DVHA to have fallen short in two areas: evaluating the actions taken by Vermont Information Technology Leaders, Inc. (VITL) — the exclusive operator of the statewide HIE network — and measuring the latter's performance over the previous two fiscal years, FY 2015 and 2016.

According to the report, the state department issued $12.3 million during that time, representing close to one-third of total funding ($38 million) paid to VITL since 2005. Oversight of the VITL HIE contracts and grants fell to both DVHA and the Agency of Administration (AOA).

Deficiencies in oversight have raised doubts about the development of a clinical data warehouse to be used for health data analysis and reporting.

"Although the State assented to VITL building the warehouse, it was not explicitly included in any agreement as a deliverable, nor did the State define its functional and performance requirements. Without such requirements, the State is not in a position to know whether the clinical data warehouse is functioning as it intends," the report states.

Upon closer inspection, the building of the clinical data warehouse casts doubt on the state's handling of its agreements with VITL: according to the audit, the state pointed to unclear contract language as authorization for the system.

"Even if we accept that this language authorizes the construction of a clinical data warehouse, which we believe is unclear, no evidence was provided to indicate that the State defined the functional and performance requirements of the warehouse," the report reads. "Without such requirements, the State is not in a position to know whether the clinical data warehouse is functioning as it intends."

Uncertainty also extends to the ownership and use of the clinical data warehouse: a lack of explicit language appears to indicate that the state is the licensee of the software used, but its ability to make use of the data is restricted by the healthcare organizations providing the information that comprises the system.

"Accordingly, VITL contends that the agreements do not currently permit VITL to disclose the personal health information in the warehouse to the State and, therefore, the State does not have any rights to access, use, or disclose this data," the report states.

As it turns out, the case of the clinical data warehouse was a microcosm of a much larger issue of poor programmatic and financial oversight of VITL. DVHA often failed to finalize agreements with VITL prior to project start dates — in five of six agreements. These delays had several consequences:

First, having VITL perform work without a signed agreement inhibited the State’s ability to hold VITL accountable to desired standards because they had not been formally documented and agreed upon. Second, the Green Mountain Care Board reported that delays in finalizing VITL’s contracts resulted in uncertainty about what terms would ultimately be agreed to or omitted, what work should be prioritized, and if and how to allocate staff, contractors, and other resources to various projects. Third, because of the four-month delay in signing contract #30205, VITL and the State agreed to eliminate two required deliverables (connecting the Cancer Registry and the Vermont Prescription Monitoring System to the VHIE). VITL also reported that the delays in signing other agreements resulted in a reduction in the number of completed activities (e.g., fewer interfaces were developed) and certain projects being completed later than expected (e.g., the event notification system was delayed four months).

State officials chalked up the delays to difficulties in receiving federal approval.

As for DVHA's measurement of VITL's performance over the previous two fiscal years, the State Auditor concluded that agreements "contained few performance measures" to assess quality or impact.

"While DVHA’s agreements with VITL did contain quantity measures (how much), there were very few quality measures (how well), and no impact measures (is anyone better off). Further, the state’s current Vermont Health Information Technology Plan (VHITP) does not specify any performance measures for gauging the performance of the VHIE," the report states.

The state's audit reveals that state officials have taken steps to address these deficiencies, including requiring more detailed invoices from VITL (which prompted a still-pending investigation into the allowability of some costs) and the decision by DVHA to fund an impact assessment of VITL's work.

Ultimately, the audit concludes that the state is in no position to determine whether the clinical data warehouse is functioning as intended, nor to measure VITL's performance in developing HIE services that have a positive impact on improving care quality and reducing care costs.

"Without quantifiable performance measures, the State’s ability to judge VITL’s efforts and gauge success is significantly inhibited," it closed.

Given the uncertainty surrounding health information exchange activities, the state of healthcare interoperability in Vermont remains problematic.
 


Health dept extends Datacom outsourcing deal for $160m
By Justin Henry June 25, 2020

Link to original article

Two more years.

The federal Department of Health has extended its IT outsourcing deal with Datacom for a further two years amid the ongoing coronavirus pandemic.

The department handed the company the two-year extension last month at a cost of $159.7 million, bringing the infrastructure and support services deal to $506.3 million over seven years.

It means the contract, which covers the provision, maintenance and refresh of all hardware and software, has now more than doubled in cost since Datacom scooped the deal from IBM in 2015.

The deal also covers a range of enterprise data warehouse services that the department had previously sourced from Accenture.

The extension follows two additional amendments last year, which added $92.9 million ($67.7 million and $25.2 million) to the cost of the contract.

The larger of the two amendments related to an increase in the department’s consumption of services over the term of the contract.

A spokesperson told iTnews that the latest amendment would see the term of the contract pushed out until 30 June 2022.

“The original term of the contract was set to expire on 30th June 2020. The contract has been extended for two years,” the spokesperson said.

“The Department has chosen to exercise a contract extension option available under the contract.”

However, unlike the two amendments last year, the spokesperson said “no new services have been added as part of the extension”.

When Datacom became the incumbent provider five years ago, it helped shift the department to a contemporary outcomes-based model with consumption-based pricing to reduce annual IT costs.

The transition, which took six months, involved establishing a support capability for the department’s enterprise data warehouse, data centres and 490 servers, according to Datacom.

It followed 15 years with a traditional IT services outsourcing model from IBM – a deal that was renewed six times, including one in which ministerial approval was granted to keep it going.

The department currently has an average staffing level (ASL) of 3800.
 


Data brokers and the implications of data sharing - the good, bad and ugly
By Neil Raden July 19, 2019

Link to original article

Summary: The term "data sharing" is expanding, but in a problematic way that raises flags for companies and consumers alike. Neil Raden provides a deeper context for data sharing trends, dividing them into the good, bad and ugly.

The term "data sharing" has, until recently, referred to scientific and academic institutions sharing data from scholarly research.

The brokering or selling of information is an established industry and doesn't fit this definition of "sharing," but the term is popping up there anyway. Scholarly data sharing is mostly free of controversy, but all other forms of so-called sharing present some concerns.

Information Resources (IRI), Nielsen and Catalina Marketing have been in the business of collecting and selling data and applications for decades, but the explosion of computing power, giant network pipelines, cloud storage and, lately, AI is fertile ground for the creation of literally thousands of data brokers, mostly unregulated and presently a challenge to privacy and fairness:

Currently, data brokers are required by federal law to maintain the privacy of a person's data if it is used for credit, employment, insurance or housing. Unfortunately, this is clearly not scrupulously enforced, and beyond those four categories, there are no regulations (in the US). And while medical privacy laws prohibit doctors from sharing patient information, medical information that data brokers get elsewhere, such as from the purchase of over-the-counter drugs and other health care items, is fair game.

Selling Healthcare Data:

One might assume that your medical records are private and only used for the purposes of your healthcare, but as Adam Tanner writes in How Data Brokers Make Money Off Your Medical Records:

IMS and other data brokers are not restricted by medical privacy rules in the U.S., because their records are designed to be anonymous, containing only year of birth, gender, partial zip code and doctor's name. The Health Insurance Portability and Accountability Act (HIPAA) of 1996, for instance, governs only the transfer of medical information that is tied directly to an individual's identity.

It is a simple process for skilled data miners to combine anonymized and non-anonymized data sources to re-identify people from what is supposed to be protected medical records:

One small step toward reestablishing trust in the confidentiality of medical information is to give individuals the chance to forbid collection of their information for commercial use, an option the Framingham study now offers its participants, as does the state of Rhode Island in its sharing of anonymized insurance claims. "I personally believe that at the end of the day, individuals own their data," says Pfizer's Berger [Marc Berger oversees the analysis of anonymized patient data at Pfizer]. "If somebody is using [their] data, they should know." And if the collection is "only for commercial purposes, I think patients should have the ability to opt out."
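To make the re-identification technique described above concrete, here is a minimal Python sketch of the basic mechanics: joining a "de-identified" dataset to an identified one on shared quasi-identifiers such as year of birth, gender and partial zip code. All records below are invented for illustration.

# Invented records showing how quasi-identifiers re-identify "anonymous" data.
import pandas as pd

# A "de-identified" medical dataset: no names, as described above.
medical = pd.DataFrame([
    {"birth_year": 1957, "gender": "F", "zip3": "021", "diagnosis": "diabetes"},
    {"birth_year": 1984, "gender": "M", "zip3": "303", "diagnosis": "asthma"},
])

# A public, identified dataset (think voter roll or marketing list).
public = pd.DataFrame([
    {"name": "Jane Doe", "birth_year": 1957, "gender": "F", "zip3": "021"},
    {"name": "John Roe", "birth_year": 1984, "gender": "M", "zip3": "303"},
])

# Joining on the shared quasi-identifiers re-attaches names to diagnoses.
reidentified = medical.merge(public, on=["birth_year", "gender", "zip3"])
print(reidentified[["name", "diagnosis"]])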

There are also legitimate data markets that gather and curate data responsibly. Most notable lately is Snowflake, which I'll cover below. Others are Datamarket.com, which is now part of QLIK, Azure Data Marketplace (Microsoft) and InfoChimps.com.

One I can't get my arms around is Acxiom. They are a $1B business that collects all sorts of information about people in 144 million households. Apparently their business is creating profiles so advertisers can target you more accurately. That seems innocent enough, but I don't know if that's the whole story. However, about five years ago, Acxiom launched https://aboutthedata.com/portal, which allows you to see what data they have about you.

Even more remarkable, you can correct mistakes and you can opt out. According to Acxiom, though, if you do opt out, you can expect to get a lot of ads you're not interested in. Keep in mind, though, that this business is still unregulated, so it would take an investigative reporter to validate these claims.

Then there is this: Acxiom, a huge ad data broker, comes out in favor of Apple CEO Tim Cook's quest to bring GDPR-like regulation to the United States:

In the statement, Acxiom said that it is "actively participating in discussions with US lawmakers" on consumer transparency, which it claims to have been voluntarily providing "for years." Still, the company denied that it partakes in the unchecked "shadow economy" which Cook made reference to in his op-ed.

The good - let's start with data.gov

From Wikipedia: Data.gov is a U.S. government website launched in late May 2009 by the then Federal Chief Information Officer (CIO) of the United States, Vivek Kundra. Data.gov aims to improve public access to high value, machine readable datasets generated by the Executive Branch of the Federal Government. The site is a repository for federal, state, local, and tribal government information, made available to the public. Data.gov has grown from 47 datasets at launch to over 180,000 (actually now over 250,000).

[Chart: the vastness and variety of free, open and curated data on data.gov]

Don't confuse this with: The Open Data Initiative

The Open Data Initiative (ODI) is a joint effort to securely combine data from Adobe, Microsoft, SAP, and other third-party systems in a customer's data lake. It is based on three guiding principles:

• Every organization owns and maintains complete, direct control of all their data
• Customers can use AI to get insights from unified behavioral and operational data
• Partners can easily leverage an open and extensible data model to extend solutions

ODI is an ambitious effort with admirable goals, but it is not the subject of this article.


The bad

Epsilon (recently acquired in April 2019 for $4.4B) refused to give a congressional committee all the information it requested, saying: "We also have to protect our business, and cannot release proprietary competitive information." The information at issue included data on people who are believed to have medical conditions such as anxiety, depression, diabetes, high blood pressure, insomnia, and osteoporosis.

Sprint, T-Mobile, and AT&T said they were taking steps to crack down on the "misuse" of customer location data after an investigation this week found how easy it was for third parties to track the locations of customers. (Misuse? They SOLD the data).

Experian sold Social Security numbers to an identity theft service posing as a private investigator.

The ugly

Optum. The company, owned by the massive UnitedHealth Group, has collected the medical diagnoses, tests, prescriptions, costs and socioeconomic data of 150 million Americans going back to 1993, according to its marketing materials. Since most of this is covered by HIPAA, they are very clever in getting around the regulations. But that socioeconomic thing is a real red flag.

What it means, at the very minimum, is the use of the "social determinants": income and social status, employment, childhood experiences, gender, genetic endowment. That's just the start. You have to ask yourself, why would anyone want to use this information? Life insurance, car insurance, mortgage, education, adoption, personal liability insurance, health insurance, renting, employment…there is no end to it and you will never know what's in there.

The World Privacy Forum found a list of rape victims for sale. At one data broker, the group also found brokers selling lists of AIDS patients, the home addresses of police officers, a mailing list for domestic violence shelters (which are typically kept secret by law) and a list of people with addictive behaviors toward drugs and alcohol.

Tactical Tech and artist Joana Moll purchased 1 million online dating profiles for 136€ from USDate, a supposedly US-based company that trades in dating profiles from all over the globe.

Snowflake's Data Sharing

Snowflake is a cloud-native data warehouse offering. Their secret sauce is the separation of data from logic. Take Amazon as an example (Snowflake also runs on Microsoft Azure, and will run on Google Cloud shortly): your data resides in S3, where storage costs are asymptotically approaching zero, and you basically pay only for processing on EC2. Everything works as a "virtual data warehouse," meaning you create abstractions over the data and nothing moves or is copied. You can have thousands of virtual data warehouses over a single copy of the data.

I don't know this for sure, but I suspect Snowflake, despite their success, saw the need to create some other technology, as data warehouses are a limited market. What they came up with was using their existing technology to provide a mechanism for data providers to locate their data in a Snowflake region and allow others to "rent" it without copying or downloading it. Besides the obvious productivity and cost savings, Snowflake added features to their data sharing product, including some level of curation and verification of the data. I get the impression this is still a work in progress.

And, because all access to data is through (virtual) data warehouse views, integration of data sources, reference data and a level of semantic coherence - all qualities of a data warehouse - are there. In contrast to a bucket of bits you can download and wrangle later, this seems like a good idea to me.
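As a rough sketch of how this sharing works in practice, assuming hypothetical account, database and share names (the SQL follows Snowflake's documented data sharing commands, but treat the details as illustrative rather than definitive):

# Sketch of Snowflake data sharing via the snowflake-connector-python client.
# Account and object names are hypothetical; fill in real credentials to run.
import snowflake.connector

# Provider side: publish a table without moving or copying it.
provider = snowflake.connector.connect(
    account="provider_acct", user="...", password="...")
cur = provider.cursor()
cur.execute("CREATE SHARE retail_share")
cur.execute("GRANT USAGE ON DATABASE sales TO SHARE retail_share")
cur.execute("GRANT USAGE ON SCHEMA sales.public TO SHARE retail_share")
cur.execute("GRANT SELECT ON TABLE sales.public.transactions TO SHARE retail_share")
cur.execute("ALTER SHARE retail_share ADD ACCOUNTS = consumer_acct")

# Consumer side: "rent" the data; nothing is downloaded or duplicated.
consumer = snowflake.connector.connect(
    account="consumer_acct", user="...", password="...")
cur2 = consumer.cursor()
cur2.execute("CREATE DATABASE rented_sales FROM SHARE provider_acct.retail_share")
cur2.execute("SELECT COUNT(*) FROM rented_sales.public.transactions")
print(cur2.fetchone())

The consumer's CREATE DATABASE ... FROM SHARE creates a reference, not a copy; the provider's single copy of the data never moves, which is where the productivity and cost savings come from.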

I asked Justin Langseth, Snowflake's CTO, if he was concerned about criminal, civil or even ethical exposure to Snowflake from the data provided. His e-response was:

Legally no we're just the communication platform, the provider of the data is responsible for their data... but we are looking at some tools that can detect hidden bias in models and data though, so it is an area of interest. Should this be enough of a reason to not have people share data? There's tons of social good that can come from this as well.

The problem with his response is two-fold: First, not just the data but also any calculations and modeling a customer will do take place within Snowflake. Second, legal responsibility is an abstract term. You may not be legally responsible, but you may still be charged or sued and have to defend yourself, with an uncertain outcome.

Besides all of these issues, I'm wondering how many companies have data someone else would want to buy. If you dig into data lakes, the volume comes from things like log files, which would be useless without context, and imported data, which may not be resalable anyway. Between data.gov and Google and Facebook et al., is there really a market for this? I'm also thinking about edge data; how would you package that, given that the trend is not to bring it back to the cloud (though I still don't understand how you do machine learning at the edge)?

Langseth also recently posted an article on Medium covering the "hardest" issues data marketplaces will face:

1. Faked and Doctored data
2. Sales of Stolen, Confidential, and Insider Data
3. Piracy by Buyers of Data
4. Big Data can be really Big
5. Data is Fast
6. Data Quality can be Questionable
7. Lack of Metadata Standards

And in conclusion, he asks: So how do IOTA, SingularityNET, and Datum address these issues?

Mostly they don't, at least so far. Most of the projects working on decentralized data marketplaces have simply not hit these issues yet, as they are just in a test mode on a test network. To the extent they have thought about the trust-oriented issues, most of them propose either a reputation system or a centralized validation authority. Reputation systems for data marketplaces are highly prone to Sybil attacks (large #'s of fake accounts colluding), and if you need a centralized authority forever, you're defeating the purpose of a decentralized crypto system and may as well do everything the old way.

My take

The battle for privacy is already lost. Once data is out, it's gone. Stemming the flow of current data could eventually dilute the value of the data brokers, but that requires regulation, which is unlikely in the USA. To rein in data brokers, who exist in the shadows (as opposed to polluting coal-fired power plants), will require digital enforcement and for-good trolls sniffing out the bad guys. The only question is, who will pay for the development and operation?
 


DXC books $81M CMS data warehouse support order
By Ross Wilkers | Dec 20, 2017

Link to original article

DXC Technology has won a one-year, $81.6 million task order with the Centers for Medicare and Medicaid Services for enterprise IT services to help operate the main portion of CMS’ data warehouse.

CMS received five offers for the order it awarded via the National Institutes of Health’s $20 billion CIO-SP3 contract vehicle, according to Deltek data.

The company will be responsible for the integrated data repository's information systems architecture and data models. DXC said in a release it will also carry out extract, transform and load (ETL), user support, data quality and support functions.

Within the data repository is a Hadoop and Teradata enterprise data warehouse that handles data related to CMS’ program benefits.

Task order work will aim to ensure that the data and data services provided by the repository for Part A and Part B claims are "payment grade," CMS said in an October 2016 sources sought notice.

CMS defines payment grade as automated validation that the data loaded into the repository is exactly as it was received from the sending source, along with the installation and enforcement of internal controls to maintain separation of duties.

The agency also determines payment grade based on automated reporting and reconciliation processes in place to confirm the data that was loaded. In most cases CMS expects the reporting to be automated but all reconciliation is manual.
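As a hypothetical sketch of what such automated validation and reconciliation reporting could look like (this is not CMS's actual process; the file names and fingerprint scheme are invented):

# Hypothetical "payment grade" load check: confirm that what was loaded into
# the warehouse matches what the sending source sent. File names are invented.
import csv
import hashlib

def summarize(rows):
    # Row count plus an order-independent content fingerprint.
    count, digest = 0, 0
    for row in rows:
        h = hashlib.sha256("|".join(row).encode("utf-8")).hexdigest()
        digest ^= int(h, 16)  # XOR so row order doesn't matter
        count += 1
    return count, digest

with open("claims_extract.csv", newline="") as f:  # as received from the source
    source = summarize(csv.reader(f))

with open("claims_loaded.csv", newline="") as f:   # as exported from the warehouse
    loaded = summarize(csv.reader(f))

# Automated report; any discrepancy gets flagged for manual reconciliation.
if source == loaded:
    print(f"OK: {loaded[0]} rows loaded, content matches source")
else:
    print(f"MISMATCH: source={source[0]} rows, loaded={loaded[0]} rows")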

DXC is in the process of separating and merging its U.S. government business into Vencore and KeyPoint Government Solutions to create a new, publicly-traded company.
 


Delaware joins eight-state health care data sharing initiative
By Nick Ciolino - Jun 25, 2018

Link to original article

Delaware is now part of an initiative to share best practices for collecting and using health care data.

The National Governors Association created the project, which also includes Arkansas, Colorado, Indiana, Iowa, Minnesota, Vermont and Washington. It seeks to determine the best use of data analytics to inform Medicaid and other state health spending policy.

Dr. Elizabeth Brown is Medical Director of the Delaware Division of Medicaid and Medical Assistance. She says state health officials have set up a data warehouse meant to inform Delaware’s decisions as the state moves from a volume-based to a value-based health system and sets a healthcare spending benchmark. She adds this is an opportunity to share what the First State has learned and get input from other states.

We’re going to take a step back look at all of our data systems, look at what best practices are across the country and make sure we are aligning with those best practices,” said Brown.

As a state where the cost of health care is growing faster than its economy, data plays a large role in Delaware’s health spending policy.

But Brown says it’s important to realize the strengths and weaknesses of the data that’s available.

“And that’s actually one of the reasons that projects like this are so important,” she said. “We are analyzing what we can get out of data, what the questions that can be answered accurately and completely with our data are, and where we need to be thinking outside of just the claims data.”

With the support of the NGA, the state health systems will be sharing data techniques with one another over the next 16 months, but will not share the data itself to protect patient privacy.

About 230,000 Delawareans receive Medicaid.
 


Texas HHSC privacy breach may affect 1.8k individuals
Written by Julie Spitzer | June 20, 2017

Link to original article

The Texas Health and Human Services Commission notified clients after discovering a box containing protected health information outside an unsecured dumpster belonging to a commission eligibility office.

The forms in the box — which included client information of 1,842 people in the Houston area — may have contained information such as names; client numbers; dates of birth; case numbers; phone numbers; mailing addresses; Social Security numbers; health information; and bank account numbers. HHSC is offering those affected by the breach one year of free credit monitoring services, although the agency currently has no evidence that anyone viewed the information, Texas HHSC Assistant Press Officer Kelli Weldon confirmed to Becker's Hospital Review via e-mail.

Ms. Weldon said HHSC is reviewing its processes and procedures for disposing of documents that contain private information to prevent this type of incident from occurring in the future.
 


Behind Georgia’s Covid-19 dashboard disaster
The Georgia Department of Public Health saw its reputation scorched as a result of the state’s ridiculed Covid-19 dashboard. But as it turns out, the health department had little control over the troubled site.
BY KEREN LANDMAN - OCTOBER 24, 2020
Research for this story was supported by the Fund for Investigative Journalism.

Link to original article

On Tuesday, April 28, eight days after Brian Kemp sent shock waves nationwide as the first governor to announce he would reopen his state during the pandemic, a quiet storm was brewing over another of Kemp’s decisions. State officials were sending flurries of emails about the previous day’s launch of Georgia’s new Covid-19–tracking dashboard—the primary tool that business owners would use to decide when or whether to reopen, now that they could. The launch was supposed to mark an improvement over the state’s preexisting Covid-19 webpage. But it was not going well.

Nancy Nydam, director of communications for the Georgia Department of Public Health, forwarded to two of her colleagues an email she’d received listing constituents’ complaints about the dashboard: deaths by county and demographic had disappeared; age and gender information had vanished; the color scheme was difficult to see for some readers; numbers on the page contradicted each other. At least one state agency reached out with an urgent need for data that were no longer on the page—an office manager from the Georgia Emergency Management and Homeland Security Agency (GEMA) wanted answers from the health department “like ASAP” to a list of questions about missing demographic information regarding hospitalizations and deaths, as well as some other metrics. “I wanted to see if you guys have the information listed below in an easy to share format?” she wrote.

That day’s hitches were not the first indication of the dashboard’s potential problems; as recently as the weekend before its launch, the state’s lead epidemiologists noted that Dougherty County, where the virus’s scorching arc through low-income Black communities had rendered Albany the city with the second-highest number of Covid-19 cases per capita in America, was absent from the as-yet-unpublished dashboard’s list of “top five” counties.

Nor would the dashboard operate smoothly in the weeks and months to come. That much would become clear both to state officials firing off frantic emails and to bewildered Georgians trying to interpret the dashboard’s data in an attempt to decide whether to visit a restaurant, attend religious services, or send their children to summer camp or daycare.

What remained unclear to the public, however, was who exactly was pulling the strings behind the state’s maligned Covid-19 dashboard. Although by all accounts it would appear that it was operated by the Georgia Department of Public Health, some skeptics felt that the fingerprints of the state’s public-health experts were conspicuously absent from the dashboard bearing the agency’s name.

In May, health department commissioner Dr. Kathleen Toomey abruptly ended an interview with a WABE reporter when he raised a question to that effect.

“Who is making the call about what information the Department of Public Health is displaying on [its data dashboard] page?” reporter Sam Whitehead asked. “Is that being made within your agency?”

“Listen, I’m gonna have to run,” Dr. Toomey responded, in what came across as an almost comical attempt to avoid the question. “I actually can’t answer this right now because I’m getting called by the Governor’s office.”

The answer to Whitehead’s question proved more elusive than it should have. The Atlanta Journal-Constitution reported in July that the health department had not fulfilled any of the dozens of open records requests seeking emails relating to the state’s handling of the Covid-19 pandemic since March. In August, the AJC reported that GEMA had redacted enormous amounts of information from Covid-19–related records requests it had fulfilled—and presented the newspaper with a bill for nearly $33,000 to fulfill additional requests.

Atlanta was able to obtain emails illuminating the inner workings of the state’s Covid-19 dashboard not from the state’s Department of Public Health but from the Governor’s Office of Planning and Budget. Why would the office that handles Kemp’s and the state’s budgetary affairs have been the custodian of emails about what ostensibly belongs in the state health department’s domain? Because that office had outsourced the dashboard to a private company—and had assumed what public-health experts describe as an unusually expansive role in overseeing the project.

A series of open records requests Atlanta filed to the Governor’s Office of Planning and Budget yielded thousands of emails concerning the state’s new Covid-19 dashboard, sent between employees of that office and those of the health department—as well as those of the third-party vendor tasked by that office with creating the dashboard. An examination of those emails revealed the health department had limited input into and no real oversight over the dashboard during its creation and in the months after its launch. Additionally, the sidelining of the health department allowed for errors in the analysis, interpretation, and visualization of the state’s Covid-19 data, while simultaneously costing the state tens of thousands of dollars—and time it did not have to spare.

Other open records requests for emails to and from a different state agency showed that at the same time the Covid-19 dashboard was suffering from very public problems, health department officials were working in collaboration with that agency to create a different dashboard—and that after its launch, they were unsuccessful in their attempts to make its existence widely known.

Furthermore, when the dashboard elicited public outrage, the health department shouldered the blame for errors over which it had no control, damaging the relationship between the agency and the community it serves.

“This is the type of information that you make informed decisions on—decisions that impact millions of people in a jurisdiction,” says Dr. Syra Madad, an infectious-disease epidemiologist and special pathogens preparedness expert in the New York City hospital system, in reference to state-run Covid dashboards. Because the impact of dashboards on those decisions is so outsized, authorities must take great care in determining who oversees them, according to Dr. Madad. “It’s okay to bring in outside individuals or contract with other entities as long as it’s in collaboration,” she says. “But if this [outsourcing of the dashboard] was based on a political decision and not in collaboration with public-health people that actually know what they’re doing, then that’s a recipe for disaster.”

In her April 28 email, Nydam, the health department’s communications director, had been particularly concerned about an inquiry from the AJC in relation to the one-day-old dashboard: “The most pressing is this email from the AJC,” she had written to two health department employees. “Someone must talk to them or we are going to get dragged through the dirt for something that we did not do.”

In response to Atlanta’s detailed questions about the contents of the emails—including why the health department didn’t have more control over the dashboard on its own site and whether its epidemiologists were given enough input into the dashboard—the governor’s press secretary, Cody Hall, only responded: “We are referring comment to DPH here.” When Atlanta pointed out that the questions concerned the actions and decisions of the Governor’s Office of Planning and Budget, Hall would only state: “As the media contact for the Governor’s Office my comment is: ‘I am referring this media request to the Department of Public Health.’”

Similarly detailed questions to the health department were met with this statement from Nydam: “Throughout the COVID-19 pandemic, the Georgia Department of Public Health has worked and continues to work closely with Governor Kemp’s office, the Georgia Department of Community Health and the Georgia Emergency Management and Homeland Security Agency to provide data that is accurate and transparent. We continually review and update features of the dashboard with our vendor . . . to ensure we are providing as complete a picture as possible of COVID-19 in Georgia.”

Several experts on American public-health infrastructure told Atlanta it’s not uncommon for health departments to have a contractual arrangement with a third party to help with certain aspects of data management or with special, time-limited projects like surveys. But it’s unusual to completely outsource a public-health data analysis that shows up on a health department’s site while failing to give the health department oversight of that analysis, says Janet Hamilton, executive director of the Council of State and Territorial Epidemiologists, a nonprofit organization representing public-health epidemiologists. She points out that a state’s team of epidemiologists is uniquely equipped to interpret, analyze, and visualize public-health data.

“That is the job of an epidemiologist, to not just produce a report—a biostatistician can do that—but [to carry out] the ‘ground truthing’ of it,” says Hamilton. That is, tethering the data to real events rather than the projections of policy experts. “It’s just so critical that you do have the right epidemiologists that are leading the efforts and able to see inside the work.”

In Georgia, those epidemiologists existed; they were employed by the Department of Public Health. But they were not leading the efforts.

By Monday, March 16, the novel coronavirus had begun to wreak havoc on Georgians’ lives. The night before, Atlanta mayor Keisha Lance Bottoms had declared a state of emergency, and it was the first day of remote learning for students in many school districts statewide. The Department of Public Health’s daily Covid-19 status report—at that time, a bare-bones page consisting of no more than a case density map of the state, a list of cases by county, and a couple of pie charts—counted 99 cases and one death due to the virus.

That morning, Chavis Paulk, the division director of analytics in Governor Kemp’s Office of Planning and Budget, sent an email introducing himself and his team to Theresa Do, a Washington, D.C.–based epidemiologist and manager at SAS, a data-analysis software and consulting company headquartered in North Carolina. The email mentioned an Excel file containing the details of each suspected Covid-19 infection in the state, which Paulk’s team had just uploaded to a secure server.

It was an innocuous enough introduction, but it opened the door to a protracted and consequential barrage of emails between SAS, the governor’s planning and budget office, and, eventually, the health department.

SAS has been around since the 1960s, when it was known as the Statistical Analysis System, a computer program for analyzing agricultural data. Later incorporated in Raleigh, the company has since evolved into a multinational software and data analysis consulting corporation with more than 14,000 employees. Its software is widely used in health-services research and in public health, including at the Centers for Disease Control and Prevention (CDC); the Morbidity and Mortality Weekly Report—the agency’s flagship publication—often notes use of the company’s software.

The relationship between the governor’s office and SAS was relatively new. In an annually renewable contract initially signed in August 2019, the company agreed to provide software and consulting services to the Governor’s Office of Planning and Budget at a total cost of nearly $3.7 million over five years. But OPB’s director since early 2019, Kelly Farr, who also had worked for Kemp back when the governor was the secretary of state, already knew SAS well: From 2017 to 2019, Farr had worked for the company as an account executive.

The data in the Excel file that the governor’s planning and budget office sent to SAS on March 16 were similar to the data the health department was using to make its own Covid-19 webpage, then only four days old. Over the next six weeks, as the health department continued to maintain its Covid page, the team at SAS would develop an entirely different one using its own software and analysts.

Well before the launch of the SAS dashboard, the Covid-19 webpage managed by the health department had its own problems. As the SAS team worked on its prototype—and as novel coronavirus infections surged in Georgia—the health department scrambled to keep its webpage updated with the flood of information coming its way. Its efforts were complicated by the massive influx of inaccurate and incomplete data pouring in via antiquated reporting processes managed by a decentralized and underfunded public-health system. The effects of these problems would only be amplified once the state’s public-health authorities no longer had control of how Covid-19 data was presented on the health department’s own website.

The pressures of a public-health emergency can create intense demand for frequent, real-time reporting that may exceed a health department’s capacity, according to Hamilton, with the Council of State and Territorial Epidemiologists. But when outside data analysts responsible for quality control don’t see a dataset through a public-health lens, the high-pressure environment can lead to errors, she says. “I don’t necessarily want to say that [any errors are] malicious—I think that they’re being driven in part by unrealistic expectations that data is coming in in a way that is much cleaner” than it is, she says.

On April 11, Farr, director of the governor’s planning and budget office, sent an email to Lorri Smith, Governor Kemp’s chief operating officer, and Dr. Toomey, the health-department commissioner, with two links to the SAS team’s work in progress: one with “high level information that could be incorporated as [a] website” and another with “additional information and insights.”

Four days later, health-department epidemiologist Laura Edison responded to an email from Anand Balasubramanian, the governor’s technology advisor, in which he’d asked about “some concerns” she had with the dashboard prototype. “I think this is a great display,” she wrote back, “and just have some nuances to discuss.” In a conference call summarized in a subsequent email, Edison and her colleagues noted that in some places, the dashboard used inappropriate terminology and lacked sufficient explanatory text; in others, key metrics and tables were absent, or existed where they didn’t belong; the graph showing the daily case count did not use shading to indicate a 14-day “pending period” to account for the lag time between a person’s onset of symptoms and the confirmation of their positive test result by the state. SAS epidemiologist Do summarized health-department staffers’ recommendations in a table spanning three pages. (SAS would make nearly all the changes, she wrote.)
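
The 14-day “pending period” the epidemiologists requested is a common epidemic-curve convention: the most recent two weeks of daily counts are flagged as provisional, because late-arriving reports will continue to raise them. As a rough illustration of the idea (not SAS’s implementation, which the emails do not describe), a minimal Python sketch with invented data might shade the window like this:

    # Minimal sketch of a 14-day "pending period" on an epidemic curve.
    # All data here is invented for illustration.
    import pandas as pd
    import matplotlib.pyplot as plt

    dates = pd.date_range("2020-04-01", periods=60, freq="D")
    cases = pd.Series(range(60), index=dates)  # placeholder daily counts

    fig, ax = plt.subplots()
    ax.bar(dates, cases.values, width=1.0)

    # Shade the provisional window: these counts will rise as late
    # reports arrive, so a dip here is not evidence of a real decline.
    pending_start = dates[-1] - pd.Timedelta(days=13)
    ax.axvspan(pending_start, dates[-1], color="gray", alpha=0.3,
               label="Last 14 days: counts incomplete")
    ax.legend()
    plt.show()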

But Karl Soetebier, the director of the health department’s informatics office, later would make plain just how little input he’d had into the SAS dashboard.

“My only real involvement to date has been to provide the data to OPB [the governor’s planning and budget office] and a few discussions with the folks from SAS about the data itself,” he wrote to Balasubramanian. “I have no access to the site and no real awareness of who is responsible for the details behind this or what process is needed to have changes made.”

When Kemp announced on Monday, April 20, that he’d soon allow nail salons, hair salons, and bowling alleys—followed by restaurants and movie theaters—to resume serving customers, Georgia did not yet meet the criteria to reopen as set forth in White House guidelines (namely, a downward trajectory of documented cases within a 14-day period). President Trump himself criticized Kemp for reopening the state prematurely. The following weekend, the day after the first businesses reopened their doors, SAS’s Georgia team lead Albert Blackmon wrote to Aaron Cooper with the governor’s planning and budget office and several others, saying: “I know that there is a desire to go live with the site very soon.”

Blackmon acknowledged minor inconsistencies between SAS’s and the health department’s analyses of the state’s Covid-19 data and noted that, if there were still concerns about SAS’s numbers, his team would need to get on the phone with the health department immediately and attempt to reconcile any discrepancies before SAS’s new dashboard was unveiled.

Two days later, on the morning of Monday, April 27, Kemp’s technology advisor Balasubramanian wrote in an email to his colleagues and to SAS that the governor’s office wanted the SAS dashboard to go live that afternoon. The launch would come a day ahead of schedule—and an hour and 15 minutes in advance of a press conference at which Governor Kemp, with health-department commissioner Dr. Toomey at his side, discussed how restaurants would safely reopen for dine-in customers effective immediately. Kemp also took a few moments to introduce the new data dashboard: “We realized as a team that we can provide a more unified, user-friendly platform for Georgians in every corner of our state.”

The next day, the health department’s Soetebier vented to his higher-up, Dr. Toomey: “As you know we were given a new website for the public yesterday for which we have had little input on to date and for which we no longer have direct control.” He also made clear that SAS should take responsibility for any dashboard problems. “I have asked them to own the ongoing list of issues that are identified with the dashboard and to commit to reviewing their progress on them with us regularly,” he wrote.

The public reaction to the dashboard was negative and swift. An AJC article two days after the dashboard’s launch noted that it “confused ordinary Georgians as they decide whether Gov. Brian Kemp was right to begin reopening the state’s [businesses]” and was “making it difficult for the public to determine if Georgia is meeting a key White House criteria for reopening.”

Three days after the dashboard’s launch, Megan Andrews, the health department’s director of government relations, forwarded a roundup of constituent complaints to SAS’s Blackmon, asking for assistance in responding to the concerns expressed in the constituents’ emails.

Blackmon replied, “We will get you answers ASAP.” Four days later, Andrews’s deputy, Emily Jones, sent a follow-up email: “We are really in need of some answers for constituents,” she wrote on May 4. “A lot of people are now accusing us of trying to hide data and/or misrepresenting, so getting them information quickly is important.”

Particularly worrying to Jones was the concern several constituents had raised about perceived manipulation of the data to artificially show a decrease in cases. They “believe that these graphs are intentionally designed to show a downward trend and are wondering if a better explanation of the methodology can be given,” she wrote.

SAS’s Blackmon seemed to think the existing explanation on the dashboard was enough: “There is a clear asterisk under the chart” explaining that the last 14 days in the chart may be missing cases, he wrote. “That is what I have been telling people,” replied Jones, “but I wanted to make you aware that we are getting several of these inquiries a day.”

The next Saturday, May 9, a Twitter user called out an egregious graphic on the dashboard. “I’m sorry but I have to curse your twitter feeds with this nightmare graph from @GaDPH,” she tweeted. “The X axis shows dates, BUT not in chronological order for some godforsaken reason.” In an attached image captured from the dashboard, cases descended from left to right, at first glance suggesting a downward trend as time progressed—but as the out-of-order dates indicated, time was not actually progressing but jumping all over the place.

[Screenshot: the chart published on Georgia’s Covid-19 dashboard in May that falsely showed a decrease in the state’s infections by rearranging the order of the dates at the bottom.]

Other Twitter users were quick to speculate about the explanation for the chart’s unusual configuration: “Oh, we know the reason. A clear attempt to make the data say what they want it to say, rather than just letting it speak,” wrote one. Journalists also were perplexed: “Only in Brian Kemp’s Georgia is the first Thursday in May followed immediately by the last Sunday in April,” a Washington Post columnist quipped. Pete Corson of the AJC tweeted that the graphic had been “the subject of much head scratching” at his publication.
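
The underlying mistake is easy to reproduce: if dates are treated as category labels and the bars are sorted by case count rather than by date, any series can be made to look like a steady decline. A hypothetical sketch of the bug and its fix, with invented numbers (the article does not show how the actual chart was generated):

    # Sketch of the non-chronological x-axis bug, with invented numbers.
    import pandas as pd
    import matplotlib.pyplot as plt

    daily = pd.Series(
        [40, 55, 48, 62, 51],
        index=pd.to_datetime(["2020-04-26", "2020-04-30", "2020-05-02",
                              "2020-05-07", "2020-05-09"]),
    )

    fig, (bad, good) = plt.subplots(1, 2, figsize=(10, 4))

    # Misleading: bars ordered by descending count, so the dates on the
    # x-axis jump around and the chart fakes a downward trend.
    by_count = daily.sort_values(ascending=False)
    bad.bar(by_count.index.strftime("%b %d"), by_count.values)
    bad.set_title("Sorted by count (misleading)")

    # Correct: bars in chronological order.
    by_date = daily.sort_index()
    good.bar(by_date.index.strftime("%b %d"), by_date.values)
    good.set_title("Chronological order")
    plt.show()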

In a response to Corson, Kemp’s director of communications, Candice Broce, implied the health department was to blame: “The graph was supposed to be helpful,” she tweeted, “but was met with such intense scorn that I, for one, will never encourage DPH to use anything but chronological order on the x axis moving forward.”

Over the next two weeks, a volley of errors emerged from the dashboard: A chart showing Covid-19 cases by race mistakenly included a diagnosis date in 1970, making it unreadable; the total case number inadvertently included—then abruptly expunged—231 serology test results, resulting in a confusing decrease in positive cases between reporting periods; and data points went missing from charts depicting individual counties’ daily case numbers.
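
The article does not explain the 1970 date, but a plausible mechanism, offered here only as an assumption, is a missing or zero timestamp being parsed as the Unix epoch, which is January 1, 1970; a single such point stretches a chart’s time axis back five decades. A hypothetical sketch of the failure and a simple guard:

    # Hypothetical: a zero timestamp parsed as seconds since the Unix
    # epoch becomes 1970-01-01 and wrecks the chart's time axis.
    import pandas as pd

    raw = pd.Series([1587945600, 1588032000, 0])  # 0 stands in for "missing"
    parsed = pd.to_datetime(raw, unit="s")
    print(parsed)  # 2020-04-27, 2020-04-28, and a stray 1970-01-01

    # Guard: treat dates outside a plausible window as missing.
    plausible = parsed.where(parsed >= pd.Timestamp("2020-01-01"))
    print(plausible.dropna())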

A May 19 AJC article explored multiple explanations for the mistakes, quoting Broce as saying of the health department: “We are not selecting data and telling them how to portray it, although we do provide information about constituent complaints, check it for accuracy, and push them to provide more information if it is possible to do so.” Although the story noted a Kemp aide had blamed “a software vendor” for the widely ridiculed nonchronological graph, it did not give further detail on the extent or nature of the vendor’s responsibility.

The next morning, the Columbus Ledger-Enquirer reported that the dashboard’s misstep with the serology tests “artificially lowers the state’s percentage of positive tests.” (Emails indicate that the dashboard’s errors stemming from the tests were due to the health department misclassifying them. “This is not a technical issue per se with the website,” Soetebier wrote to his health-department colleagues and Kemp’s chief management officer, Caylee Noggle.)
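
The arithmetic behind the Ledger-Enquirer’s point is straightforward: antibody (serology) tests mostly come back negative, so folding them into the viral-test totals shrinks the percent-positive figure. A worked example with invented numbers:

    # Invented numbers showing how mixing serology tests into the totals
    # dilutes the percent-positive figure.
    viral_tests, viral_positives = 10_000, 1_200  # 12.0% positive
    sero_tests, sero_positives = 5_000, 200       # mostly negative

    pct_viral = 100 * viral_positives / viral_tests
    pct_mixed = 100 * (viral_positives + sero_positives) \
                    / (viral_tests + sero_tests)

    print(f"viral tests only: {pct_viral:.1f}%")  # 12.0%
    print(f"mixed totals:     {pct_mixed:.1f}%")  # 9.3%, artificially lower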

Amid the fresh wave of public rancor in the wake of the story, the health department’s Edison warned in an email the next day to Noggle that “by rushing through data analyses, we run the risk of making errors.” Edison proposed that a clarifying footnote be added to the dashboard. “It takes time to work through these complicated and far from perfect data.”

Four hours later, Edison sent Noggle and several other Kemp staffers a four-page data FAQ of sorts to post to the site. Balasubramanian, the governor’s technology advisor, forwarded the document to the SAS team with a request to post it to the website—but two minutes later, he walked that request back: “Hold on, don’t POST,” he wrote. “Please review and let me know if you have any suggestions.” (Kemp staffers later stripped almost all of the explanatory content from the data FAQ the health department team had written.)

At a May 21 press conference, Kemp addressed some of the public derision related to the dashboard. Citing his administration’s commitment to transparency and honesty, he praised Dr. Toomey and the health department: “They are taking massive amounts of data from all sources, putting them into accessible format under a global spotlight, all at breakneck speed,” he said. “Please afford them some patience, and please steer clear of personal attacks.”

But Kemp did not mention his own team’s role in creating much of the pressure the health department was under, nor the fact that some of the highest-profile mistakes had not been the health department’s errors at all.

Emails also show that when health-department staffers sought potential fixes with SAS, their requests were not treated with a sense of urgency. In early June, Leslie Onyewuenyi, a newly hired interim director of informatics who was brought on to work above Soetebier and improve data quality at the health department, asked SAS’s Blackmon for a 30-minute call to review SAS’s quality-control process.

“I don’t believe that there is a need for a call unless [the health department’s] Karl [Soetebier] would like for us to convene,” Blackmon responded.

“We need a high level overview of process flow on your end,” Onyewuenyi wrote back. “Are there any quality control checks on your end before the data is published? The aim of this exercise is to reduce the risk of publishing inaccurate data whether from DPH side or from your end.”

After Onyewuenyi appeared not to get a response to this email or to a follow-up one he sent three days later reiterating his request, the governor’s planning and budget office intervened to set up a call between Onyewuenyi and Blackmon, noting that Blackmon was on vacation.

“We’ll respond on email first,” a SAS project manager wrote. “We can then follow-up as needed.”

At around the same time, Balasubramanian forwarded to SAS a media question that had been sent to the health department about a county map: Why was the threshold for a county to be shaded red—indicating the highest case rates in the state—changing from day to day?

SAS responded by forwarding an explanation from one of its systems engineers: “It’s a fair point that it could look like we’re ‘moving the goalposts’, it might be something we could revisit.” But the method behind the color-coding would remain unchanged until, more than a month later, a viral tweet pointed to it as an example of how the health department “is violating data visualization best practices in a way that’s hiding the severity of the outbreak.”
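
The “moving goalposts” effect comes from relative binning: if “red” is defined as that day’s top slice of county rates, the cutoff shifts with the data, and a county can drop out of red without its own numbers improving. Fixed thresholds avoid this. A hypothetical contrast of the two approaches, with invented rates:

    # Hypothetical contrast of relative vs. fixed map-color thresholds.
    import numpy as np

    day1 = np.array([12, 40, 95, 160, 300])  # county rates, invented
    day2 = np.array([12, 40, 95, 900, 300])  # the 160 county spikes to 900

    # Relative: "red" = top 20% that day. The cutoff moves with the data,
    # so the 300-rate county is red on day 1 (cutoff 188) but not on
    # day 2 (cutoff 420), even though its own rate never changed.
    for rates in (day1, day2):
        cutoff = np.quantile(rates, 0.8)
        print(f"relative cutoff {cutoff:.0f} -> red: {rates[rates >= cutoff]}")

    # Fixed: "red" = rate above a constant threshold, comparable day to day.
    RED = 250
    for rates in (day1, day2):
        print(f"fixed red: {rates[rates >= RED]}")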

Trent Smith, a senior external communications specialist with SAS, responded to a series of Atlanta’s questions about its work on Georgia’s Covid-19 dashboard by stating: “We can’t share customer names without their permission.” Smith also wrote: “SAS has been used for decades in public-health departments, from local to state to national governments and is currently in all U.S. state health departments.”

As the health department was publicly battered for mistakes over which it had little control, its leadership was well aware of the need to improve the dashboard and the magnitude of the fallout from its problems. On the Fourth of July, after reviewing examples of other states’ data dashboards, Dr. Toomey asked health-department staff to request that the SAS team add certain metrics to the dashboard and noted the negative public perception of her agency: “I am getting complaints from the public as well as other officials that we are deliberately not being transparent.”

Some of the state’s public-health experts felt Georgians deserved Covid-19 analysis and insight beyond what the SAS dashboard ultimately offered, and they tried—unsuccessfully—to offer that information on the health department’s site.

Back in March, at the same time SAS began what would be a six-week effort to build its dashboard, a team from another government agency was creating other Covid-19 dashboards for internal use.

Susan Miller, who leads the Georgia Geospatial Information Office (GIO), began working in March on maps to assist other state agencies in allocating pandemic-related resources. Her team used a product made by the California-based company Esri. No other mapping platform on the market is as “comprehensive, holistic, or stable” as Esri, she says. (Miller worked for the company as a product engineer in the early 2000s.)

In mid-April, the health department’s Edison asked Miller’s team to create a report aimed at providing the governor—and, possibly, the general public—with the data Georgians would need to decide when it was safe to reopen for business. She asked if it could take the same form as one of the internal dashboards the team already had created.

Once Miller’s team got started on the project, it took less than a week for a prototype to come together. The GIO’s Esri dashboard, compared with the SAS dashboard, had “increased functionality, such as ZIP code level data, death demographics by county/zip, and downloadable data,” wrote the health department’s Edison in an email to Miller and colleagues at other agencies on April 28, one day after the SAS dashboard launched. “I do not think the SAS Dashboard has the functionality that the ESRI one has and I think they can be used in tandem to complement each other.”

Esri’s software and the use of its consulting services weren’t free: The contract Miller’s parent agency signed with Esri’s Disaster Response Program in May totaled $265,000. But those dollars went toward Esri’s work on multiple mapping projects for a variety of agencies.

Health-department officials were hopeful about sharing the Esri dashboard on their agency’s website. “My goal is at a minimum to make this accessible from a link on the page,” Soetebier wrote on April 29 to health-department colleague Edison and staffers at GIO and Esri, “though we should be able to get a new page put together to properly house it.”

On May 13, Edison forwarded to Miller an announcement about a CDC partnership with Esri aimed at enabling all states—at no cost to them—to build or enhance data dashboards using the software. The next day, Edison exclaimed in an email to Miller, Soetebier, and an Esri employee: “We have some traction!” She wrote that two people from the governor’s office “are going to pitch the dashboard!”

But the Esri dashboard would not end up being included or even noted anywhere on the health department’s site. It was published on the GIO’s Covid-19 website, but it wasn’t publicized until Miller’s office published a blog post about it three months later, in mid-August—and even then, the existence of the dashboard remained largely unheralded for several more weeks.

Eventually, one government agency would find value in the multiple Esri dashboards Miller’s team had produced and published on GIO’s Covid-19 website. In September, GEMA replaced its daily Covid-19 situation report with that website, calling it “a one-stop shop for all of the data in a format that is more easily accessible.”

At the most concrete level, the problems with the state’s Covid-19 dashboard made it unreliable as a tool for Georgians simply trying to figure out how to safely go about their lives. As Georgia planned to reopen its doors for business in late spring, the health department fielded an onslaught of questions and complaints from people confused about how to interpret what they were seeing on the dashboard. The lead pastor at a church in Cobb County wrote seeking help in understanding how rampant the virus was locally, in the hopes of helping his church determine when to reopen for in-person worship. The assistant superintendent of a school district south of Seattle requested an explanation of conflicting case numbers in the hopes of advocating to reopen his own state: “I would like my state open, and Georgia serves as a bellwether,” he wrote. “Please explain the data so that I can advocate correctly and not put my community at risk.”

In the next three months, Georgians celebrated Memorial Day and the Fourth of July, and Governor Kemp squashed mayors’ efforts to enact local mask mandates and other protective measures. Also in that time, more than 155,000 Georgians were infected with the novel coronavirus, of whom 2,551 died.

Beginning in late July, the dashboard stopped attracting as much negative attention as it had early on. Although two public-health experts recently told Atlanta they would like to see additional data on the dashboard, such as case information by zip code and information related to school outbreaks, public outrage over the dashboard’s appearance has largely ebbed.

But public-health experts say the damage to the health department’s reputation caused by the dashboard’s pattern of problems may have lasting effects. In a statewide survey the health department conducted in late July, only 55 percent of respondents perceived the agency as credible. Amber Schmidtke, a volunteer advisor to the state’s Covid-19 Data Task Force who until recently was an assistant professor of microbiology at Mercer University in Macon, recalled several fumbled efforts at transparency on the state’s Covid dashboard, concluding: “So, yeah, I think it does harm people’s trust.”

Melanie Thompson, an Atlanta doctor and researcher who coauthored two July letters protesting Kemp’s handling of the pandemic that were signed by thousands of healthcare workers, says the contents of her inbox made the public’s loss of faith plain: “The emails and things that I got from a variety of people made me feel that there is no trust in the governor to do the right thing scientifically,” she says, “and that extends to the Department of Public Health, because [its] commissioner basically serves at the pleasure of the governor and does not contradict him at all.”

When public trust in an institution is sufficiently eroded, it can be hard to recover, says Joseph Cappella, a specialist in health communication at the University of Pennsylvania’s Annenberg School for Communication. “It’s the old idea of poisoning the well,” he says. When public-health institutions lose credibility as a consequence of one misstep, he says, the resulting lack of trust can impact their ability to effectively carry out other public-health activities, like vaccine distribution.

Clarity about who’s doing the work on state websites is important, too, says Laura Harker, a senior policy analyst at the Georgia Budget and Policy Institute. When a consulting company’s work is presented on an agency’s website, “having that made clear somewhere—at least the name on the bottom of who the outside contractor is, or some type of contact information for the data managers—is always, I think, important to have for transparency purposes,” she says.

The state of Georgia slashed the health department’s epidemiology budget during the lean years of the recession—from $6 million annually in 2009 to less than $4 million in 2011—and that budget was never fully restored. Georgia’s public-health funding lags well below the national average.

“People are thinking that public health has failed society,” says Dr. Madad, the New York City–based epidemiologist and preparedness expert. “No. Society has failed public health because we didn’t invest and see the value of it. And we’re seeing the consequences today.”

The early chaos of the Covid-19 dashboard shows how Georgia squandered the chance to shine a light on the merits and necessity of a public-health department, says Thompson. “This was an opportunity for DPH to shine, . . . to come into its own, and to really teach the public what public health is all about, to really engender trust.”

On April 28, the day after the SAS dashboard launched, health-department epidemiologist Edison and GIO head Miller exchanged emails about the difficulty of getting the best Covid-19 data to the public and the need for a more collaborative effort among government agencies. “My head is spinning,” Edison wrote. “I just want to share the damn data.”

Miller responded: “We can either feed the real data to Georgians, the country and the world . . . or let them fend for themselves. . . . I will back you on getting the data out until the end of time!!!!”
 


Massive Health Data Warehouse Delayed Again, A Decade After Texas Pitched It
The Texas Tribune
By Jim Malewitz and Edgar Walters
August 15, 2016

Link to original article

Texas health regulators are starting from scratch in designing a system to store massive amounts of data — after spending millions of dollars trying to roll out a version that’s now been scrapped.

Charles Smith, executive commissioner of the Texas Health and Human Services Commission, said Monday that his agency had recently nixed a $121 million contract to create an Enterprise Data Warehouse, an enormous database that would store a wide range of information about the many programs the agency administers. First funded in 2007, the project was expected to be up and running a few years later.

Because the original design would not link enough programs at the sprawling agency, regulators would essentially start from scratch on a much larger — and therefore more useful — system, Smith told members of the Texas House State Affairs Committee at a hearing on state contracting reform efforts.

"We were in the process of building a two-bedroom, two-bath home," he said, likening the effort to a home construction project. "You get it ready to prep your foundation, and I realize my spouse is pregnant with quadruplets."

The most recent design, which was largely focused on storing data on Medicaid and the Children’s Health Insurance Program, "isn’t going to meet the needs of our family," he added.

The update stirred concerns from some lawmakers about the lack of progress on a pricey project with a troubled history.

"Thirty-five million dollars we’ve spent on a project that was supposed to cost $120 million. For that, we have nothing?" asked Rep. Dan Huberty, R-Houston.

"Are we getting back to where we started?" asked Rep. Four Price, R-Amarillo.

Texas has spent $35 million on the project so far, with most coming from federal funds, said Smith, who was appointed to his post in May. About $6 million was tapped from state funds.

Smith did not have an estimate of how much the new, larger project would cost, because those assessments won’t begin until next fall — after the legislative session that begins in January.

He pushed back against suggestions that spending thus far was for naught, noting that the agency — as part of the planning process — had moved to a new software system that would be used in the new data warehouse.

"We’ll go through and develop a plan, and a timeline, and we’ll come back next session with everything we need to obtain through the process," he said.

Since the project was first funded, it has suffered myriad delays, as well as uncertainty about whether the federal government would pitch in with additional funding.

In 2013, the Health and Human Services Commission finally invited private companies to submit proposals for the contract. The next year, state officials chose Truven Health Analytics, a Michigan-based firm, as their tentative winner.

But after a series of contracting scandals at the agency prompted the resignation of several high-ranking officials, the state started over, and in November 2014 asked companies to re-apply for the funding.

Those proposals were due in February 2015, and state officials anticipated the project would begin on Sept. 1 of that year, according to the state’s latest published timeline for the project.

At the time, a spokeswoman for the health commission told the Houston Chronicle that the quality of the project was “more important than the timeline.” The agency nonetheless said it was “still possible” the project would be up and running by the end of 2015.

Smith said his agency needed a warehouse that would give it instant access to more data than the scrapped plans accounted for — such as information related to foster care.

"I’m talking to our staff about what is the capacity of our system," he said. "We don’t know how many families are willing and able."

Such concerns come at a time when his agency is growing in size and scope. Three of the state’s five health and human services agencies are consolidating into a single "mega-agency" — a reorganization ordered by state lawmakers in 2015.

The other two agencies, which oversee the state’s foster care system and public health infrastructure, respectively, will be considered for consolidation in 2017. State leaders have said that changing the Health and Human Services Commission’s configuration would streamline services and improve efficiency.

Some lawmakers took heart that Smith had refused to follow through with the warehouse’s original design, calling it a thoughtful approach.

"It sounds like the contract was inadequate," said Rep. Byron Cook, a Corsicana Republican who chairs the State Affairs Committee. "I appreciate that."
 


Problem-plagued Texas data project delayed again
Houston Chronicle
By Brian M. Rosenthal
Tuesday, June 28, 2016

Link to original article

AUSTIN -- Texas state health officials once again are delaying a massive data project that has struggled to get off the ground for more than a decade.

The state Health and Human Services Commission informed lawmakers Tuesday it was pausing the "Enterprise Data Warehouse" project, a plan for an elephantine database housing dozens of information sets about everything from welfare benefits to Medicaid.

"HHSC and the other Health and Human Services agencies are going through a transformation process..." the commission explained to the lawmakers. "Therefore, we are reevaluating our long-term data needs and want to ensure the best investment of state resources."

In a separate letter to the company that was set to run the project, the state officials said they would "revisit this necessary project after the transformation process has been substantially completed."

The commission said it was canceling the contractor solicitation process altogether, which means that even if officials decide to restart the project, it will be years before a vendor is chosen.

The decision is the latest twist in a project that has experienced an almost-comical series of setbacks and controversies.

First discussed in 2005, the project was envisioned as a way to improve services and spur savings through better data analysis. Lawmakers funded the project in 2007, calling for it to be operational by February of 2009.

Over the years, state budget writers have set aside more than $100 million for the project -- money that could not be used elsewhere -- and spent more than $12 million, mostly on consultants.

After a slew of delays caused by both the state and federal governments, the health commission thought it finally had gotten the project on track in the spring of 2014, when officials began negotiating a contract with Truven Health Analytics of Ann Arbor, Michigan.

Then came the eruption of a contracting scandal over alleged favoritism by commission officials toward another data company, 21CT of Austin. In a meeting in August of 2014, commission lawyer Jack Stick, who already had steered a Medicaid fraud detection project to 21CT, seemed to imply that that company could do the Enterprise Data Warehouse for less money than Truven.

Two weeks later, negotiations with Truven were over. The commission blamed the company's asking price and said there had been a leak that led Truven to learn about Stick's comment.

Stick and four other commission officials eventually resigned in connection with the 21CT scandal, and the Medicaid fraud project was canceled.

The data warehouse project was put out for bid again in November of 2014.

This February, the health commission disclosed that Truven once again had emerged as the winning bidder and would be given a $104 million contract -- nearly $35 million less than what was being discussed in 2014, said agency spokesman Bryan Black.

"The Health and Human Services Commission is excited the contract is signed and we are moving forward," Black said in February.

The fate of the contract may have shifted when former Executive Commissioner Chris Traylor retired last month. His replacement, Charles Smith, opted for the new approach, records show.



