US Health Data News



 
Datacom 160 million - Health dept extends Datacom outsourcing deal for $160m
 
Data Brokers / Diginomica - Data brokers and the implications of data sharing
 
DXC / CMS - DXC books $81M CMS data warehouse support order
 
DE 8-State - Delaware joins eight-state health care data sharing initiative
 
Digital Health Care - The Digital Health Care Environment
 
DSHS Loss - 2 officials sacked for not reporting loss of birth records ...
 
Geneca - Doomed From the Start?
 
GA DPH Grants - Georgia Department Of Public Health Awarded Grants
 
Harvard IT Risks - Why Your IT Project May Be Riskier Than You Think
 
HHS 2019-11 Fine - Texas health department to pay $1.6M for HIPAA violations
 
HHS Protect from CPI - New Secretive Data System Shaping Federal Pandemic Response
 
IBM 100 IT - IBM Announces $100 Million Health IT Program
 
Governance Key - Governance key to creating effective health data warehouse
 
Kaiser Apps - Analysis: App-Happy Health Care Full of Optimism, Money
 
UITALD - Interactive Tools to Assess the Likelihood of Death
 
MA DW - Massachusetts Senate passes $1.7B bond bill
 
McKinsey 2012 - Delivering large-scale IT projects on time ...
 
Michigan DW - Michigan saves $1 million per business day with data warehouse
 
Medicare Use - Feds to allow use of Medicare data to rate doctors, hospitals and other health care providers
 
Most Expensive - 10 Most Expensive Hospitals in the U.S.
 
NAPHSIS FOD - NAPHSIS Releases New Fact of Death Query Service
 
NYT COVID 2022-02-20 - The C.D.C. Isn’t Publishing Large Portions of the Covid Data It Collects
 
Premier-IBM - IBM and the Premier healthcare alliance to integrate nation's healthcare data
 
SAS Disaster - Behind Georgia’s Covid-19 dashboard disaster
 
SC PHG - S.C. public health group gets $11.25 million grant
 
TX HHSC #1 - Problem-plagued Texas data project delayed again
 
TX HHSC #2 - Massive Health Data Warehouse Delayed Again
 
TX HHSC #3 - Texas HHSC privacy breach may affect 1.8k individuals
 
UPMC - $100 Million Investment in Sophisticated Data Warehouse and Analytics
 
Veritas / TR - Veritas to Buy Thomson Reuters Health Care Data Management Line
 
VT Data Warehouse - Audit Questions Health Information Exchange Oversight in VT
 



The C.D.C. Isn’t Publishing Large Portions of the Covid Data It Collects

The agency has withheld critical data on boosters, hospitalizations and, until recently, wastewater analyses.

By Apoorva Mandavilli Feb. 20, 2022

Link to original article

For more than a year, the Centers for Disease Control and Prevention has collected data on hospitalizations for Covid-19 in the United States and broken it down by age, race and vaccination status. But it has not made most of the information public.

When the C.D.C. published the first significant data on the effectiveness of boosters in adults younger than 65 two weeks ago, it left out the numbers for a huge portion of that population: 18- to 49-year-olds, the group least likely to benefit from extra shots, because the first two doses already left them well-protected.

The agency recently debuted a dashboard of wastewater data on its website that will be updated daily and might provide early signals of an oncoming surge of Covid cases. Some states and localities had been sharing wastewater information with the agency since the start of the pandemic, but it had never before released those findings.

Two full years into the pandemic, the agency leading the country’s response to the public health emergency has published only a tiny fraction of the data it has collected, several people familiar with the data said.

Much of the withheld information could help state and local health officials better target their efforts to bring the virus under control. Detailed, timely data on hospitalizations by age and race would help health officials identify and help the populations at highest risk. Information on hospitalizations and death by age and vaccination status would have helped inform whether healthy adults needed booster shots. And wastewater surveillance across the nation would spot outbreaks and emerging variants early.

Without the booster data for 18- to 49-year-olds, the outside experts whom federal health agencies look to for advice had to rely on numbers from Israel to make their recommendations on the shots.

Kristen Nordlund, a spokeswoman for the C.D.C., said the agency has been slow to release the different streams of data “because basically, at the end of the day, it’s not yet ready for prime time.” She said the agency’s “priority when gathering any data is to ensure that it’s accurate and actionable.”

Another reason is fear that the information might be misinterpreted, Ms. Nordlund said.

Dr. Daniel Jernigan, the agency’s deputy director for public health science and surveillance, said the pandemic exposed the fact that data systems at the C.D.C., and at the state levels, are outmoded and not up to handling large volumes of data. C.D.C. scientists are trying to modernize the systems, he said.

“We want better, faster data that can lead to decision making and actions at all levels of public health, that can help us eliminate the lag in data that has held us back,” he added.

The C.D.C. also has multiple bureaucratic divisions that must sign off on important publications, and its officials must alert the Department of Health and Human Services — which oversees the agency — and the White House of their plans. The agency often shares data with states and partners before making data public. Those steps can add delays.

“The C.D.C. is a political organization as much as it is a public health organization,” said Samuel Scarpino, managing director of pathogen surveillance at the Rockefeller Foundation’s Pandemic Prevention Institute. “The steps that it takes to get something like this released are often well outside of the control of many of the scientists that work at the C.D.C.”

The performance of vaccines and boosters, particularly in younger adults, is among the most glaring omissions in data the C.D.C. has made public.

Last year, the agency repeatedly came under fire for not tracking so-called breakthrough infections in vaccinated Americans, and focusing only on individuals who became ill enough to be hospitalized or die. The agency presented that information as risk comparisons with unvaccinated adults, rather than provide timely snapshots of hospitalized patients stratified by age, sex, race and vaccination status.

But the C.D.C. has been routinely collecting information since the Covid vaccines were first rolled out last year, according to a federal official familiar with the effort. The agency has been reluctant to make those figures public, the official said, because they might be misinterpreted as the vaccines being ineffective.

Ms. Nordlund confirmed that as one of the reasons. Another reason, she said, is that the data represents only 10 percent of the population of the United States. But the C.D.C. has relied on the same level of sampling to track influenza for years.

Some outside public health experts were stunned to hear that information exists.

“We have been begging for that sort of granularity of data for two years,” said Jessica Malaty Rivera, an epidemiologist and part of the team that ran the Covid Tracking Project, an independent effort that compiled data on the pandemic until March 2021.

A detailed analysis, she said, “builds public trust, and it paints a much clearer picture of what’s actually going on.”

Concern about the misinterpretation of hospitalization data broken down by vaccination status is not unique to the C.D.C. On Thursday, public health officials in Scotland said they would stop releasing data on Covid hospitalizations and deaths by vaccination status because of similar fears that the figures would be misrepresented by anti-vaccine groups.

“We are at a much greater risk of misinterpreting the data with data vacuums, than sharing the data with proper science, communication and caveats,” Ms. Rivera said.

When the Delta variant caused an outbreak in Massachusetts last summer, the fact that three-quarters of those infected were vaccinated led people to mistakenly conclude that the vaccines were powerless against the virus — validating the C.D.C.’s concerns.

But that could have been avoided if the agency had educated the public from the start that as more people are vaccinated, the percentage of vaccinated people who are infected or hospitalized would also rise, public health experts said.
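
To see why, consider a minimal illustrative calculation; the coverage, effectiveness and risk figures below are assumed for the example, not taken from the Massachusetts outbreak.

```python
# Illustrative only: assumed vaccination coverage and vaccine effectiveness,
# not figures from the Massachusetts outbreak described above.
coverage = 0.85          # share of the population that is vaccinated (assumed)
effectiveness = 0.80     # vaccine cuts each person's infection risk by 80% (assumed)
base_risk = 0.01         # infection risk for an unvaccinated person (assumed)

vaccinated_infections = coverage * base_risk * (1 - effectiveness)
unvaccinated_infections = (1 - coverage) * base_risk

share_vaccinated = vaccinated_infections / (vaccinated_infections + unvaccinated_infections)
print(f"Share of infections occurring in vaccinated people: {share_vaccinated:.0%}")
# With these assumed numbers, roughly half of all infections occur in vaccinated
# people, even though the vaccine reduces each vaccinated person's risk by 80%.
```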

“Tell the truth, present the data,” said Dr. Paul Offit, a vaccine expert and adviser to the Food and Drug Administration. “I have to believe that there is a way to explain these things so people can understand it.”

Knowing which groups of people were being hospitalized in the United States, which other conditions those patients may have had and how vaccines changed the picture over time would have been invaluable, Dr. Offit said.

Relying on Israeli data to make booster recommendations for Americans was less than ideal, Dr. Offit noted. Israel defines severe disease differently than the United States, among other factors.

“There’s no reason that they should be better at collecting and putting forth data than we were,” Dr. Offit said of Israeli scientists. “The C.D.C. is the principal epidemiological agency in this country, and so you would like to think the data came from them.”

It has also been difficult to find C.D.C. data on the proportion of children hospitalized for Covid who have other medical conditions, said Dr. Yvonne Maldonado, chair of the American Academy of Pediatrics’s Committee on Infectious Diseases.

The academy’s staff asked their partners at the C.D.C. for that information on a call in December, according to a spokeswoman for the A.A.P., and were told it was unavailable.

Ms. Nordlund pointed to data on the agency’s website that includes this information, and to multiple published reports on pediatric hospitalizations with information on children who have other health conditions.

The pediatrics academy has repeatedly asked the C.D.C. for an estimate on the contagiousness of a person infected with the coronavirus five days after symptoms begin — but Dr. Maldonado finally got the answer from an article in The New York Times in December.

“They’ve known this for over a year and a half, right, and they haven’t told us,” she said. “I mean, you can’t find out anything from them.”

Experts in wastewater analysis were more understanding of the C.D.C.’s slow pace of making that data public. The C.D.C. has been building the wastewater system since September 2020 and has spent the past few months developing the capacity to present the data, Ms. Nordlund said. In the meantime, the C.D.C.’s state partners have had access to the data, she said.

Despite the cautious preparation, the C.D.C. released the wastewater data a week later than planned. The Covid Data Tracker is updated only on Thursdays, and the day before the original release date, the scientists who manage the tracker realized they needed more time to integrate the data.

“It wasn’t because the data wasn’t ready, it was because the systems and how it physically displayed on the page wasn’t working the way that they wanted it to,” Ms. Nordlund said.

The C.D.C. has received more than $1 billion to modernize its systems, which may help pick up the pace, Ms. Nordlund said. “We’re working on that,” she said.

The agency’s public dashboard now has data from 31 states. Eight of those states, including Utah, began sending their figures to the C.D.C. in the fall of 2020. Some relied on scientists volunteering their expertise; others paid private companies. But many others, such as Mississippi, New Mexico and North Dakota, have yet to begin tracking wastewater.

Utah’s fledgling program, begun in April 2020, has now grown to cover 88 percent of the state’s population, with samples being collected twice a week, according to Nathan LaCross, who manages Utah’s wastewater surveillance program.

Wastewater data reflects the presence of the virus in an entire community, so it is not plagued by the privacy concerns attached to medical information that would normally complicate data release, experts said.

“There are a bunch of very important and substantive legal and ethical challenges that don’t exist for wastewater data,” Dr. Scarpino said. “That lowered bar should certainly mean that data could flow faster.”

Tracking wastewater can help identify areas experiencing a high burden of cases early, Dr. LaCross said. That allows officials to better allocate resources like mobile testing teams and testing sites.

Wastewater is also a much faster and more reliable barometer of the spread of the virus than the number of cases or positive tests. Well before the nation became aware of the Delta variant, for example, scientists who track wastewater had seen its rise and alerted the C.D.C., Dr. Scarpino said. They did so in early May, just before the agency famously said vaccinated people could take off their masks.

Even now, the agency is relying on a technique that captures the amount of virus, but not the different variants in the mix, said Mariana Matus, chief executive officer of BioBot Analytics, which specializes in wastewater analysis. That will make it difficult for the agency to spot and respond to outbreaks of new variants in a timely manner, she said.

“It gets really exhausting when you see the private sector working faster than the premier public health agency of the world,” Ms. Rivera said.
 



Delivering large-scale IT projects on time, on budget, and on value
October 1, 2012 | Article
By Michael Bloch, Sven Blumberg, and Jürgen Laartz

Link to original article

Large IT efforts often cost much more than planned; some can put the whole organization in jeopardy. The companies that defy these odds are the ones that master key dimensions that align IT and business value.

As IT systems become an important competitive element in many industries, technology projects are getting larger, touching more parts of the organization, and posing a risk to the company if something goes wrong. Unfortunately, things often do go wrong. Our research, conducted in collaboration with the University of Oxford, suggests that half of all large IT projects—defined as those with initial price tags exceeding $15 million—massively blow their budgets. On average, large IT projects run 45 percent over budget and 7 percent over time, while delivering 56 percent less value than predicted. Software projects run the highest risk of cost and schedule overruns.

These findings—consistent across industries—emerged from research recently conducted on more than 5,400 IT projects by McKinsey and the BT Centre for Major Programme Management at the University of Oxford. After comparing budgets, schedules, and predicted performance benefits with the actual costs and results, we found that these IT projects, in total, had a cost overrun of $66 billion, more than the GDP of Luxembourg. We also found that the longer a project is scheduled to last, the more likely it is that it will run over time and budget, with every additional year spent on the project increasing cost overruns by 15 percent.
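
As a rough illustration of how those averages translate into dollars, here is a back-of-the-envelope sketch. The project size is hypothetical, and treating the 15 percent figure as compounding per extra scheduled year is one possible reading of the study, not a formula the authors give.

```python
# Back-of-the-envelope sketch of the averages quoted above. The $20M project size is
# hypothetical, and compounding the 15% figure per extra scheduled year is an
# assumption made for illustration.
def expected_final_cost(budget, overrun=0.45):
    """Average large IT project: ~45% over budget."""
    return budget * (1 + overrun)

def overrun_with_schedule(base_overrun, extra_years, growth_per_year=0.15):
    """Assume each additional scheduled year inflates the expected overrun by ~15%."""
    return base_overrun * (1 + growth_per_year) ** extra_years

budget = 20_000_000  # hypothetical project just above the study's $15M threshold
print(f"Average-case final cost: ${expected_final_cost(budget):,.0f}")
for extra_years in range(4):
    overrun = overrun_with_schedule(0.45, extra_years)
    print(f"+{extra_years} extra year(s): overrun {overrun:.0%}, "
          f"final cost ${budget * (1 + overrun):,.0f}")
```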

Staggering as these findings are, most companies survive the pain of cost and schedule overruns. However, 17 percent of IT projects go so bad that they can threaten the very existence of the company. These unpredictable high-impact events—“black swans” in popular risk parlance—occur significantly more often than would be expected under a normal distribution. Large IT projects that turn into black swans are defined as those with budget overruns of more than 200 percent (and up to 400 percent at the extreme end of the spectrum). Such overruns match or surpass those experienced by black swans among complex construction projects such as tunnels and bridges. One large retailer started a $1.4 billion effort to modernize its IT systems, but the project was eventually abandoned. As the company fell behind its competitors, it initiated another project—a new system for supply-chain management—to the tune of $600 million. When that effort failed, too, the retailer had to file for bankruptcy.

Four ways to improve project performance

So how do companies maximize the chances that their IT projects deliver the expected value on time and within budget? Our surveys of IT executives indicate that the key to success lies in mastering four broad dimensions, which combined make up a methodology for large-scale IT projects that we call “value assurance.” The following elements make up this approach:

- focusing on managing strategy and stakeholders instead of exclusively concentrating on budget and scheduling

- mastering technology and project content by securing critical internal and external talent

- building effective teams by aligning their incentives with the overall goals of projects

- excelling at core project-management practices, such as short delivery cycles and rigorous quality checks

According to survey responses, an inability to master the first two dimensions typically causes about half of all cost overruns, while poor performance on the second two dimensions accounts for an additional 40 percent of overspending.

1. Managing strategy and stakeholders

IT initiatives too often pay little heed to strategy and stakeholders and manage projects purely according to budget and schedule targets. The perils are illustrated by one bank’s transformation effort, in which its finance department became involved only a few months before the system was due to go live. This led to several complex changes in the accounting modules as a result of a recently introduced performance-management system. Coming so late in the day, the changes delayed the launch by more than three months, at a cost of more than $8 million.

Top-performing projects, on the other hand, establish a clear view of the initiative’s strategic value—one that goes beyond the technical content. By building a robust business case and maintaining focus on business objectives along the whole project timeline, successful teams can avoid cost overruns. They can also, for example, ensure faster customer response times, obtain higher-quality data for the marketing organization, or reduce the number of required manual processes.

High-performing project teams also improve the ways in which a company manages its internal and external stakeholders, such as business and IT executives, vendors, partners, and regulators. They make sure the project aligns with the company’s overarching business strategy and undertake detailed analyses of stakeholder positions. Project leaders continually engage with all business unit and functional heads to ensure genuine alignment between business needs and the IT solutions being developed.

Good stakeholder management involves foresight when it comes to selecting vendors and negotiating contracts with them. Company negotiators should proactively identify potential risks and, for instance, expand their focus beyond unit price and seek to establish “win–win” agreements. Doing so can help ensure that the company has preferential access to the vendor’s best talent for an extended period of time.

Some companies have learned this the hard way. A bank in the Middle East negotiated hard for price with a vendor and later suffered at the hands of an inexperienced vendor team. Another bank scored well on unit price with a software-package provider for the project phase of a trading-system implementation but encountered high costs for changes and support after the system was introduced and the bank was locked into the new technology.

2. Mastering technology and content

Drawing on expert help as needed, high-performing teams orchestrate all technical aspects of the project, including IT architecture and infrastructure, functionality trade-offs, quality assurance, migration and rollout plans, and project scope. The right team will understand both business and technical concerns, which is why companies must assign a few high-performing and experienced experts for the length of the program. We estimate that the appropriate experts can raise performance by as much as 100 percent through their judgment and ability to interpret data patterns.

One common pitfall occurs when teams focus disproportionately on technology issues and targets. A bank wanted to create a central data warehouse to overcome inconsistencies that occurred among its business-unit finance data, centralized finance data, and risk data. However, the project team focused purely on developing the IT-architecture solution for the data warehouse instead of addressing the end goal, which was to handle information inconsistencies. As a result, the project budget ballooned as the team pursued architectural “perfection,” which involved the inclusion of unneeded data from other systems. This added huge amounts of unnecessary complexity. With milestones and launch dates constantly being pushed back and investments totaling almost $10 million, the bank stopped the project after 18 months.

In contrast, one public-sector institution was able to rescope and simplify its IT project’s technical requirements even though most stakeholders believed doing so was impossible. To eliminate waste and to focus on the items that represented the greatest business value, the team introduced lean techniques. At the same time, it established rigorous testing and rollout plans to ensure quality and introduced clearly defined stage gates. Through these and other actions, the team was able to check 95 percent of all test cases, fix critical defects, and verify the fixes before continuing from the unit test phase to integration testing.

3. Building effective teams

Large projects can take on a life of their own in an organization. To be effective and efficient, project teams need a common vision, shared team processes, and a high-performance culture. To build a solid team, members should have a common incentive structure that is aligned with the overall project goal, in contrast with individual work-stream goals. A business-to-technology team that is financially aligned with the value-delivery targets will also ensure that all the critical change-management steps are taken and that, for example, communications with the rest of the organization are clear, timely, and precise.

To ensure the smooth start-up of new front-end and core systems that more than 8,000 people would use, one company team launched a massive—and successful—change-management program. The program included a regular newsletter, desktop calendars that highlighted key changes and milestones, and quarterly town-hall meetings with the CEO. The team made sure all top business-unit leaders were involved during the user-acceptance phase. The company included at least one change agent on each team. These agents received training that instilled a clear understanding of the benefits of the IT change. The actions helped the company to verify that it had the required business capabilities in place to make full use of the technology being implemented and that it could deliver the business value expected in the overall project business case.

4. Excelling at core project-management practices

To achieve effective project management, there’s no substitute for tested practices. These include having a strategic and disciplined project-management office and establishing rigorous processes for managing requirements engineering and change requests. The project office should establish a few strong stage gates to ensure high-quality end products. At the same time, it needs to strive for a short delivery life cycle to avoid creating waste in the development process.

One public-sector organization established strong project control by defining an initiative’s scope in an initial six-month phase and making sure all stakeholders signed off on the plan. Beyond this phase, the organization’s board had to approve all change requests, and the project was given a pre-defined cost-overrun buffer of less than $2 million. Another organization, a high-tech company, established clear quality criteria for a project master plan, which mandated that teams break down all activities so that they required fewer than 20 person-days to complete and took no longer than four weeks.

In yet another case, instead of following a “waterfall” or linear approach, a company created integrated business and IT teams that worked on an end-to-end basis in their respective work streams. In other words, the teams participated from the beginning of the project to its completion—from defining requirements to testing. This approach helps to avoid misunderstandings during stage transitions and ensures clear responsibility and ownership. It also promotes efficiency gains and fast delivery.

Assessing the black-swan risk

The high rate of failure makes it wise to analyze prospects before starting a large IT project.

Companies usually begin with a diagnostic to determine the status of their key projects and programs—both finalized and existing projects (to understand company-specific problems) and planned projects (to estimate their true cost and duration). This diagnostic determines two conditions: the health of a project from the standpoint of the four dimensions of the value-assurance methodology and its relative prospects when compared with the outcomes of a reference class of similar projects.

In another case, an organization used a broader and more interview-driven diagnostic approach to identify critical improvement areas. The organization had recently experienced failures that led it to make a commitment to reform IT and drive fundamental improvement in IT project delivery. The diagnostic helped it realize that the major hurdle to creating a well-defined business case was the limited availability of funding during the prestudy phase. The study also revealed that the organization’s inability to arrive at a stable and accurate project scope resulted from the infrequent communication between project managers and stakeholders about issues such as new requirements and change requests, which led to deviations from the original scope.

The value-assurance approach has a solid track record. One large public-sector organization, for example, replaced about 50 legacy IT systems with a standard system for enterprise resource planning over the course of three years—within budget and on schedule—even though analysis of projects of this size and duration had indicated an expected budget overrun in the range of $80 million to $100 million. Similarly, a global insurance company used the approach to consolidate its IT infrastructure over 18 months, delivering the project on time and within budget and realizing savings of about $800 million a year.

Large-scale IT projects are prone to take too long, are usually more expensive than expected, and, crucially, fail to deliver the expected benefits. This need not be the case. Companies can achieve successful outcomes through an approach that helps IT and the business join forces in a commitment to deliver value. Despite the disasters, large organizations can engineer IT projects to defy the odds.

ABOUT THE AUTHOR(S) Michael Bloch is a director in McKinsey’s Tel Aviv office, Sven Blumberg is an associate principal in the Düsseldorf office, and Jürgen Laartz is a director in the Berlin office.
 


Massachusetts Senate passes $1.7B bond bill

Posted Jul 14, 2020 at 5:00 PM

Link to original article

Sen. Cindy Friedman, D-Arlington, recently joined her colleagues in passing a $1.7 billion General Government Bond Bill focused on capital investments to improve government infrastructure, empower communities disproportionately impacted by the criminal justice system, support early education and care providers with safe reopening during the COVID-19 pandemic and expand equitable access to remote learning opportunities for vulnerable populations across the commonwealth.

Building on the Senate’s efforts to address issues of racial equity and support communities of color, the bond bill authorizes $50 million in new economic empowerment and community reinvestment capital grants to support communities disproportionately impacted by the criminal justice system with access to economic and workforce development opportunities.

Friedman successfully secured a $2.5 million technology investment authorization to automate the Criminal Offender Record Information, or CORI, system for sealing criminal records. Under the current system, sealing a criminal record can take months — meanwhile employers, landlords, bankers and others turn people away from employment, housing and financing opportunities based on minor or old incidents that appear on CORIs.

“Our antiquated CORI system is just one example of how our system continues to disproportionally impact people of color,” said Friedman. “Now more than ever, we should be investing in the things that strengthen our communities, support our most vulnerable residents and help people restart their lives rather than penalize them for life. I’m pleased that these funds were authorized in this bill, and am grateful for my Senate colleagues for moving this important piece of legislation forward.”

In addition to empowering economically disadvantaged communities, the Senate’s bond bill authorizes capital investments to ensure accountability in public safety and modernize criminal justice data collection by providing $20 million for a body camera grant program for police departments and $10 million for a statewide criminal justice data system modernization to help better track racial and ethnic disparities across the judicial and public safety systems.

To ensure equitable access to remote learning opportunities and safe access to early child care opportunities, the Senate bond bill authorizes $50 million to enhance and expand access to K-12 remote learning technology and provides $25 million to assist licensed early education and care providers and after school programs with capital improvements to ensure safe reopening during the COVID-19 public health emergency.

The bill also addresses growing food insecurity and food supply chain needs across the commonwealth due to the COVID-19 pandemic, by authorizing $37 million for a food security grant program to address infrastructure needs for farms, retailers, fisheries, food system businesses and food distribution channels.

Additional components of the bond bill include:

• $140 million for cybersecurity upgrades to improve the commonwealth’s technology and telecommunications infrastructure.

• $115 million for municipal library improvements.

• $100 million for governmental performance and informational technology infrastructure upgrades.

• $30 million for public higher education public safety grants.

• $25 million for fire safety equipment grants.

• $20 million for municipal broadband access grants.

• $5 million for the development of a common application for MassHealth enrollees to more easily access the federal Supplemental Nutrition Assistance Program.

• $2.9 million for a public health data warehouse to track population health trends, such as COVID-19.

• $2.5 million for implementation of an automated electronic sealing process to seal certain criminal records.

The bill returns to the Massachusetts House of Representatives, where a similar bill has passed. The Senate expects differences between the two versions to be resolved quickly.
 




Link to original article

New, secretive data system shaping federal pandemic response

HHS Protect, at the center of health agency clashes, was created after the CDC’s long struggle to modernize.

By Liz Essley Whyte | September 22, 2020

The Center for Public Integrity is a nonprofit newsroom that investigates betrayals of public trust.

As deadly Ebola raged in Africa and threatened the United States, the Centers for Disease Control and Prevention pinpointed a problem: The agency had many sources of data on the disease but no easy way to combine them, analyze them on a single platform and share the information with partners. It was using several spreadsheets and applications for this work — a process that was “manual, labor-intensive, time-consuming,” according to the agency’s request for proposals to solve the problem. It spent millions building a new platform.


But at the beginning of the coronavirus pandemic, the CDC still struggled to integrate and share data. The system it had built during the Ebola crisis wasn’t up to the task. An effort to modernize all of the agency’s data collection and analysis was ongoing: One CDC official told a congressional committee in March that if the agency had modern data infrastructure, it would have detected the coronavirus “much, much sooner” and would have contained it “further and more effectively.”

By April, with coronavirus cases spiking in the U.S. and officials scrambling to wrangle information about the pandemic, the CDC had a proof-of-concept for a new system to pull together all of its various data streams. But it was having trouble figuring out how to securely add users outside the agency, as well as get the funding and political backing needed to expand it, according to two sources with close knowledge of the situation.

So the CDC turned to outsiders for help. Information technology experts at the federal Department of Health and Human Services took control of the project. Five days later, they had a working platform, dubbed HHS Protect, with the ability to combine, search and map scores of datasets on deaths, symptoms, tests, ventilators, masks, local ordinances and more.

The new, multimillion-dollar data warehouse has continued to grow since then; it holds more than 200 datasets containing billions of pieces of information from both public and private sources. And now, aided by artificial intelligence, it is shaping the way the federal government addresses the pandemic, even as it remains a source of contention between quarreling health agencies and a target for transparency advocates who say it’s too secretive.

The Center for Public Integrity is the first to reveal details about how the platform came to be and how it is now being used. Among other things, it helps the White House and federal agencies distribute scarce treatment drugs and supplies, line up patients for vaccine clinical trials, and dole out advice to state and local leaders. Federal officials are starting to use a $20 million artificial intelligence system to mine the mountain of data the platform contains.

People familiar with HHS Protect say it could be the largest advance in public health surveillance in the United States in decades. But until now it has been mostly known as a key example of President Trump’s willingness to sideline CDC scientists: In July, his administration suddenly required hospitals to send information on bed occupancy to the new system instead of the CDC.

The Trump administration has added to the anxiety surrounding HHS Protect by keeping it wrapped in secrecy, refusing to publicly share many of the insights it generates.

“I want to be optimistic that everything that is happening here is actually a net improvement,” said Nick Hart, CEO of the Data Coalition, a nonprofit that advocates for open government data. “The onus is really on HHS to explain what’s happening and be as transparent as possible… It’s difficult to assess whether it really is headed in the right direction.”

A LONG HISTORY OF DATA FRUSTRATION

To hear some tell it, the reason behind the CDC’s long struggle to upgrade its data systems can be learned in its name: the Centers — plural — for Disease Control and Prevention. Twelve centers, to be exact, and a jumble of other offices, each with its own expertise and limited funding: the National Center for Immunization and Respiratory Diseases, for example, or the Center for Preparedness and Response. Scientists at each myopically focus on their own needs and strain to work together on expensive projects to benefit all, such as upgrading shared data systems, experts familiar with the CDC said. A 2019 report from the Council of State and Territorial Epidemiologists found that the agency had more than 100 stand-alone, disease-specific tracking systems, few of them able to talk to each other, let alone add in outside data that could help responders stanch outbreaks.

“CDC has been doing things a certain way for decades,” said a person familiar with the creation of HHS Protect who was not authorized to speak on the record. “Sometimes epidemiologists are not technologists.”

The U.S. government knew for more than a decade it needed a comprehensive system to collect, analyze and share data in real time if a pandemic reached America’s shores. The 2006 Pandemic and All-Hazards Preparedness Act directed federal health officials to build such a system; in 2010 the Government Accountability Office found that they hadn’t. A 2013 version of the law required the same thing; in 2017 the GAO found again that it hadn’t happened. Congress passed another law in 2019 calling for the system yet again. In 2020 the coronavirus struck.

“We’ve had no shortage of events that have demonstrated the importance of bringing together both healthcare and public health information in a usable, deeply accessible platform,” said Dr. Dan Hanfling, a vice president at In-Q-Tel, a nonprofit with ties to the CIA that invests in technology helpful to the government. “We’ve missed the mark.”

In fighting a pandemic, the nation struggles with data at every turn: from collecting information about what’s happening on the ground, to analyzing it, to sharing it, to sending information back to the front lines. The CDC still relies on underfunded state health departments using antiquated equipment — even fax machines — to gather some types of information. The agency for years has also had ongoing, formal efforts to upgrade its data processes.

“There’ve been a lot of false starts in this area,” said Dr. Tom Frieden, the head of the CDC during the Obama administration. Frieden blamed money already spent on existing systems and local governments unwilling to make changes, among other reasons. “We had decades of underinvestment in public health at the national, state and local levels, and that includes information systems.”

The CDC attempted to fix at least some of those problems — joining and analyzing and sharing data from disparate sources — with the system it built during Ebola, known as DCIPHER. The system saved the agency thousands of hours of staff time as it responded to a salmonella outbreak and lung injuries from vaping. But it couldn’t keep up with the coronavirus. It was stored on CDC servers instead of the cloud and couldn’t handle the flood of extra data and users needed to fight COVID-19, according to two sources with knowledge of the situation.

So CDC officials handed the proof-of-concept for a new system to the chief information officer of HHS, Jose Arrieta. The CDC was having trouble figuring out how to approve and ensure the identities of new users from outside the agency, such as the White House Coronavirus Task Force, and give them appropriate permissions to view data, according to two sources with close knowledge of the situation. Arrieta and his team solved the technical problems, stitching together eight pieces of commercial software to build the platform and pulling in data from both private and public sources, including the CDC.

“Our goal was to create the best view of what’s occurring in the United States as it relates to COVID-19,” said Arrieta, a career civil servant who has worked for both Republicans and Democrats, speaking for the first time since his sudden departure from HHS in August. He said, and a friend confirmed, that he left his job primarily to spend more time with his young children after months of round-the-clock work. “It changes public health forever.”

HHS Protect now helps federal agencies distribute testing supplies and the scarce COVID-19 treatment drug remdesivir, identify coronavirus patients for vaccine clinical trials, write secret White House Coronavirus Task Force reports sent to governors, determine how often nursing homes must test their staffs for infection, inform the outbreak warnings White House adviser Dr. Deborah Birx has been issuing to cities in private phone calls — and more.

The system allows users to analyze, visualize and map information so they can, for example, see how weakening local health ordinances could affect restaurant spending and coronavirus deaths in mid-size cities across America. Arrieta’s team assembled the platform from eight pieces of commercial software, including one purchased via sole-source contracts worth $24.9 million from Palantir Technologies, a controversial company known for its work with U.S. intelligence agencies and founded by Trump donor Peter Thiel. CDC used the Palantir software for both the HHS Protect prototype and DCIPHER, and it works well, Arrieta said; contracting documents cited the coronavirus emergency when justifying the quick purchase.

And now a new artificial intelligence component of the platform, called HHS Vision, will help predict how particular interventions, such as distributing extra masks in nursing homes, could stanch local outbreaks. Arrieta said HHS Vision, which is not run with Palantir software, uses pre-written algorithms to simulate behaviors and forecast possible outcomes using what experts call “supervised machine learning.”
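
For readers unfamiliar with the term, “supervised machine learning” simply means fitting a model to historical examples labeled with known outcomes and using it to predict outcomes for new scenarios. The toy sketch below illustrates that general idea only; it is not based on HHS Vision’s actual data, algorithms, or software.

```python
# Generic illustration of supervised learning: fit a simple model on labeled
# historical examples, then predict the outcome of a hypothetical intervention.
# All data below is invented; nothing here reflects HHS Vision itself.
# Each example: (masks distributed per resident, staff tests per week) -> observed
# outbreak size in cases.
history = [
    ((0.0, 1), 40),
    ((1.0, 1), 30),
    ((2.0, 2), 15),
    ((3.0, 3), 8),
    ((4.0, 3), 5),
]

def predict(features, k=2):
    """k-nearest-neighbor prediction: average the outcomes of the k most similar
    historical examples."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(history, key=lambda ex: distance(ex[0], features))[:k]
    return sum(outcome for _, outcome in nearest) / k

# Forecast the outbreak size if a facility distributed 2.5 masks per resident and
# tested staff twice a week (hypothetical intervention).
print(predict((2.5, 2)))
```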

Though many of the datasets in HHS Protect are public, a scientist who wanted to use them would have to hunt for them from many agencies, clean them and help them relate to one another. That work is already done in HHS Protect.

“It is a big leap forward,” said Dr. Wilbert van Panhuis, an epidemiologist at the University of Pittsburgh who is working to get access to the platform for a group of 600 researchers. “They are making major progress in this pandemic.”

But the new system became a source of controversy this summer when officials told hospitals to stop reporting information on beds and patients to a well-known and revered CDC system, the National Healthcare Safety Network, and instead send it to Teletracking, a private contractor connected to HHS Protect. Observers feared the move undermined science and was another example of political interference with the CDC’s work. In August, hospital bed data from Teletracking sometimes diverged wildly from what states were reporting, though now it aligns more closely, said Jessica Malaty Rivera, science communication lead for the Covid Tracking Project, a volunteer organization compiling pandemic data.

“If there’s one major lesson we have from emergencies in the last 20 years… it’s not to try to create a new system but take the most robust system you have and scale it,” Frieden said. “The way to make Americans safer is to build on, not bypass, our public health system.”

Some familiar with the switch from the CDC to Teletracking said it allowed the federal government to compile more data on more hospitals. It happened, they said, because the White House task force members asked for more hospital information to prepare for the winter. Teletracking was able to start collecting extra data from hospitals in a matter of days, while the CDC said it would take weeks to make those changes.


A CDC official familiar with the situation disputed those claims, saying that the National Healthcare Safety Network provided excellent data without overburdening already-stressed hospitals. Making the switch to HHS Protect, he said, is “like taking a veteran team off the field to replace that team with rookies. You get a lot of rookie mistakes.”

The hospital data dust-up aside, some CDC officials remain skeptical of HHS Protect.

“It is a platform. It isn’t a panacea,” said a CDC official familiar with the system who didn’t want his name published because he wasn’t authorized to speak to the media. Some of the outside data sources HHS Protect depends on — including the hospital data from Teletracking — aren’t reliable, the official said, sometimes showing, for example, that a hospital had a negative number of patients in beds. “We’re seeing enough of it to warrant overall big-time concerns about the hospital data quality.”

Some are also concerned about the system’s ability to guard patient privacy: More than a dozen lawmakers sent a letter to HHS Secretary Alex Azar in July questioning how HHS Protect would protect individuals’ privacy.

But officials say HHS Protect contains no personal information on patients or others. It tracks users’ every interaction with the data and blocks them from datasets they don’t have authority to see, allowing the federal government to guard privacy and prevent data manipulation, sources familiar with the system said.

UNDER WRAPS

The Trump administration adopted data principles in 2018 that include promoting “transparency… to engender public trust.” But much of the data in HHS Protect remains off limits to the public, glimpsed only in leaked reports and occasional mentions by White House task force members. The platform’s public web portal displays the hospital bed data that caused so much controversy this summer but little else. Observers of all stripes, from Frieden to the conservative Heritage Foundation, have called for the Trump administration to make more of its data public.

Van Panhuis said HHS Protect clearly was designed with federal government users in mind, not academic researchers or the public.

“It’s a bit disappointing,” he said. “Currently we have to invent that part of the system.”

Basic data about the pandemic contained in HHS Protect remains secret and is sometimes obscured even from local public health officials. The White House task force’s secret recommendations to governors use HHS Protect data on cities’ test positivity rates, but the White House does not release those reports. And that national dataset is still nowhere to be found on any federal website. When asked, an HHS spokesperson could not point to it.

Some secrecy surrounding HHS Protect data exists for good reason, officials said: Some private companies share their data with HHS on the condition that it will be used to respond to the public health crisis and not be revealed to competitors. And releasing some of the data, even though they contain no personal information, could trigger privacy concerns, forcing officials to redact some of it. For example, it might become obvious whose symptoms were being described in data from a small, rural county with one hospital and one coronavirus patient.

But the secrecy around HHS Protect frustrates transparency advocates who want government data to be shared more openly.

Ryan Panchadsaram, who helps run the coronavirus data website Covid Exit Strategy, would like HHS Protect to publish in one location information on cases, test results and other metrics, for every city and county in the U.S., in an easily accessible and downloadable format.

“Making it available to the public shouldn’t be that difficult,” he said. “It’s a political and policy decision.”

People looking for county-level information — to make decisions about whether to visit grandparents, for example — are often out of luck. And if they want a one-stop shop for state-level data, they must turn to private sources: Panchadsaram said that even employees of state and federal agencies visit Covid Exit Strategy for information on the coronavirus. The state of Massachusetts uses his site’s data to decide which travelers must quarantine when they arrive.

“It is shocking that they come to us when the data is sitting in its purest form” in HHS Protect, he said.

Federal officials, attempting to deliver on at least some transparency promises, say they are working to set up congressional staffers with logins to HHS Protect. Staffers monitoring the pandemic say they have yet to be granted access, though some states are using the system.

The secrecy surrounding HHS Protect also means that outsiders can’t evaluate whether the platform is living up to its promise. Despite repeated requests from Public Integrity, HHS and CDC spokespeople did not make any officials available for on-the-record interviews regarding HHS Protect.

“The federal government has an obligation to make as much data and information public as possible,” said Hart, of the Data Coalition. “HHS should consider ways to improve the information it’s providing to the American people.”

Zachary Fryer-Biggs contributed to this report.
 


Harvard Business Review - September, 2011

Link to original article

Why Your IT Project May Be Riskier Than You Think
by Bent Flyvbjerg and Alexander Budzier
From the Magazine (September 2011)

To top managers at Levi Strauss, revamping the information technology system seemed like a good idea. The company had come a long way since its founding in the 19th century by a German-born dry-goods salesman: In 2003 it was a global corporation, with operations in more than 110 countries. But its IT network was antiquated, a balkanized mix of incompatible country-specific computer systems. So executives decided to migrate to a single SAP system and hired a team of Deloitte consultants to lead the effort. The risks seemed small: The proposed budget was less than $5 million. But very quickly all hell broke loose. One major customer, Walmart, required that the system interface with its supply chain management system, creating additional hurdles. Insufficient procedures for financial reporting and internal controls nearly forced Levi Strauss to restate quarterly and annual results. During the switchover, it was unable to fill orders and had to close its three U.S. distribution centers for a week. In the second quarter of 2008, the company took a $192.5 million charge against earnings to compensate for the botched project—and its chief information officer, David Bergen, was forced to resign.

A $5 million project that leads to an almost $200 million loss is a classic “black swan.” The term was coined by our colleague Nassim Nicholas Taleb to describe high-impact events that are rare and unpredictable but in retrospect seem not so improbable. Indeed, what happened at Levi Strauss occurs all too often, and on a much larger scale. IT projects are now so big, and they touch so many aspects of an organization, that they pose a singular new risk. Mismanaged IT projects routinely cost the jobs of top managers, as happened to EADS CEO Noël Forgeard. They have sunk whole corporations. Even cities and nations are in peril. Months of relentless IT problems at Hong Kong’s airport, including glitches in the flight information display system and the database for tracking cargo shipments, reportedly cost the economy $600 million in lost business in 1998 and 1999. The CEOs of companies undertaking significant IT projects should be acutely aware of the risks. It will be no surprise if a large, established company fails in the coming years because of an out-of-control IT project. In fact, the data suggest that one or more will.

We reached this bleak conclusion after conducting the largest global study ever of IT change initiatives. We examined 1,471 projects, comparing their budgets and estimated performance benefits with the actual costs and results. They ran the gamut from enterprise resource planning to management information and customer relationship management systems. Most, like the Levi Strauss project, incurred high expenses—the average cost was $167 million, the largest $33 billion—and many were expected to take several years. Our sample drew heavily on public agencies (92%) and U.S.-based projects (83%), but we found little difference between them and the projects at private companies and European organizations that made up the rest of our sample.

The True IT Pitfall

When we broke down the projects’ cost overruns, what we found surprised us. The average overrun was 27%—but that figure masks a far more alarming one. Graphing the projects’ budget overruns reveals a “fat tail”—a large number of gigantic overages. Fully one in six of the projects we studied was a black swan, with a cost overrun of 200%, on average, and a schedule overrun of almost 70%. This highlights the true pitfall of IT change initiatives: It’s not that they’re particularly prone to high cost overruns on average, as management consultants and academic studies have previously suggested. It’s that an unusually large proportion of them incur massive overages—that is, there are a disproportionate number of black swans. By focusing on averages instead of the more damaging outliers, most managers and consultants have been missing the real problem.
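
The statistical point is easy to see in a toy simulation. The distribution below is invented purely to mirror the shape the authors describe, with roughly one project in six a black swan averaging a 200% overrun; it is not their dataset or method.

```python
# Minimal illustrative sketch (hypothetical numbers, not the authors' data): with a
# fat-tailed distribution, the average overrun looks manageable even though roughly
# one project in six blows its budget catastrophically.
import random

random.seed(0)

def simulated_overrun():
    # ~1 in 6 projects is a "black swan" with a ~200% overrun on average;
    # the rest land near budget. Both spreads are assumed for illustration.
    if random.random() < 1 / 6:
        return random.gauss(2.00, 0.50)
    return random.gauss(-0.05, 0.20)

overruns = sorted(simulated_overrun() for _ in range(10_000))
mean_all = sum(overruns) / len(overruns)
cutoff = int(0.83 * len(overruns))
mean_excluding_tail = sum(overruns[:cutoff]) / cutoff

print(f"Mean overrun, all projects:         {mean_all:.0%}")            # looks moderate
print(f"Mean overrun, excluding worst ~17%: {mean_excluding_tail:.0%}")  # near zero
print(f"Worst 1% of projects overrun by:    {overruns[-100]:.0%} or more")
```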

Some of the pitfalls of tech projects are old ones. More than a decade ago, for example, Hershey’s shift to a new order-taking and fulfillment system prevented the company from shipping $100 million worth of candy in time for Halloween, causing an 18.6% drop in quarterly earnings. Our research suggests that such problems are now occurring systematically. The biggest ones typically arise in companies facing serious difficulties—eroding margins, rising cost pressures, demanding debt servicing, and so on—which an out-of-control tech project can fatally compound. Kmart was already losing its competitive position to Walmart and Target when it began a $1.4 billion IT modernization project in 2000. By 2001 it had realized that the new system was so highly customized that maintenance would be prohibitively expensive. So it launched a $600 million project to update its supply chain management software. That effort went off the rails in 2002, and the two projects contributed to Kmart’s decision to file for bankruptcy that year. The company later merged with Sears Holdings, shedding more than 600 stores and 67,000 employees.

Other countries, too, have seen companies fail as the result of flawed technology projects. In 2006, for instance, Auto Windscreens was the second-largest automobile glass company in the UK, with 1,100 employees and £63 million in revenue. Unsatisfied with its financial IT system, the company migrated its order management from Oracle to Metrix and started to implement a Microsoft ERP system. In the fourth quarter of 2010, a combination of falling sales, inventory management problems, and spending on the IT project forced it into bankruptcy. Just a few years earlier the German company Toll Collect—a consortium of DaimlerChrysler, Deutsche Telekom, and Cofiroute of France—suffered its own debacle while implementing technology designed to help collect tolls from heavy trucks on German roadways. The developers struggled to combine the different software systems, and in the end the project cost the government more than $10 billion in lost revenue, according to one estimate. “Toll Collect” became a popular byword among Germans for the woes of their economy.

Software is now an integral part of numerous products—think of the complex software systems in cars and consumer appliances—but the engineers and managers who are in charge of product development too often have a limited understanding of how to implement the technology component. That was the case at Airbus, whose A380 was conceived to take full advantage of cutting-edge technology: Its original design, finalized in 2001, called for more than 300 miles of wiring, 98,000 cables, and 40,000 connectors per aircraft. Partway through the project the global product development team learned that the German and Spanish facilities were using an older version of the product development software than the British and French facilities were; configuration problems inevitably ensued. In 2005 Airbus announced a six-month delay in its first delivery. The following year it announced another six-month delay, causing a 26% drop in share price and prompting several high-profile resignations. By 2010 the company still had not caught up with production plans, and the continuing problems with the A380 had led to further financial losses and reputational damage.

Avoiding Black Swans

Any company that is contemplating a large technology project should take a stress test designed to assess its readiness. Leaders should ask themselves two key questions as part of IT black swan management: First, is the company strong enough to absorb the hit if its biggest technology project goes over budget by 400% or more and if only 25% to 50% of the projected benefits are realized? Second, can the company take the hit if 15% of its medium-sized tech projects (not the ones that get all the executive attention but the secondary ones that are often overlooked) exceed cost estimates by 200%? These numbers may seem comfortably improbable, but, as our research shows, they apply with uncomfortable frequency.

Even if their companies pass the stress test, smart managers take other steps to avoid IT black swans. They break big projects down into ones of limited size, complexity, and duration; recognize and make contingency plans to deal with unavoidable risks; and avail themselves of the best possible forecasting techniques—for example, “reference class forecasting,” a method based on the Nobel Prize–winning work of Daniel Kahneman and Amos Tversky. These techniques, which take into account the outcomes of similar projects conducted in other organizations, are now widely used in business, government, and consulting and have become mandatory for big public projects in the UK and Denmark.
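For readers who want a concrete picture of what reference class forecasting involves, the sketch below (not from the article; the overrun figures are invented for illustration) applies the core idea: instead of trusting a project's own bottom-up estimate, you look at the empirical distribution of cost overruns across a reference class of comparable past projects and read off the uplift needed to reach a chosen level of certainty.

    # A minimal reference-class-forecasting sketch (illustrative only).
    # The overrun ratios below are invented; a real reference class would
    # come from audited outcomes of comparable IT projects.
    import numpy as np

    # Actual cost / estimated cost for comparable past projects.
    reference_overruns = np.array([
        0.95, 1.00, 1.05, 1.10, 1.15, 1.20, 1.25, 1.30,
        1.40, 1.60, 1.80, 2.20, 3.00, 4.50,   # the "fat tail"
    ])

    base_estimate = 10_000_000  # this project's bottom-up estimate, in dollars

    def required_budget(estimate, overruns, certainty=0.8):
        """Budget needed to stay within cost at the given level of certainty,
        based on the empirical distribution of overruns in the reference class."""
        uplift = np.quantile(overruns, certainty)
        return estimate * uplift

    for certainty in (0.5, 0.8, 0.9):
        budget = required_budget(base_estimate, reference_overruns, certainty)
        print(f"{int(certainty * 100)}% certainty: budget ${budget:,.0f}")

The point of the exercise is the shape of the distribution: the median uplift looks manageable, but budgeting for the 80th or 90th percentile is what protects against the black-swan tail.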

As global companies become even more reliant on analytics and data to drive good decision making, periodic overhauls of their technology systems are inevitable. But the risks involved can be profound, and avoiding them requires top managers’ careful attention.

Bent Flyvbjerg is the first BT Professor and inaugural chair of Major Programme Management at the University of Oxford’s Saïd Business School and the Villum Kann Rasmussen Professor and Chair at the IT University of Copenhagen. He is a coauthor (with Dan Gardner) of Big Plans: Why Most Fail, How Some Succeed (forthcoming from Random House).

Alexander Budzier, a consultant at McKinsey & Co., is a doctoral candidate at Saïd.

Copyright © 2022 Harvard Business School Publishing. All rights reserved.
 


Becker's Hospital Review - November 9, 2019

Link to original article

The Office for Civil Rights at the HHS slapped the Texas Health and Human Services Commission with a $1.6 million fine for HIPAA violations, according to a Nov. 7 news release.

Specifically, the OCR was penalizing the Department of Aging and Disability Services for its data breach in 2015. The department reorganized into the Texas Health and Human Services Commission in September 2017.

In a report to the OCR, the department indicated that the electronic protected health information of 6,617 individuals was accessible online. Patient data that was exposed included names, addresses, Social Security numbers and treatment information.

The department said that during the move of an internal application from a private server to a public server, a flaw in the software code allowed unauthorized users to access individuals' information. The OCR investigation found that the department failed to conduct an enterprise-wide risk analysis and to implement access and audit controls for its information systems and applications.

"Covered entities need to know who can access protected health information in their custody at all times," said the OCR Director Roger Severino. "No one should have to worry about their private health information being discoverable through a Google search."
 


Silver Spring, MD (PRWEB) March 01, 2017

Link to original article

The National Association for Public Health Statistics and Information Systems (NAPHSIS) announced today the release of a new Fact of Death (FOD) Query Service http://www.naphsis.org/evvefod, providing credentialed organizations the ability to quickly, reliably, and securely discover if a death record exists. This service is part of the NAPHSIS Electronic Verification of Vital Events (EVVE) System and is the only service in existence with the ability to match authorized queries against the databases of state or local vital record jurisdictions, where all death records in the nation are stored.

"We've been swamped by requests for death data from a variety of industries. Access to complete, timely, and accurate death record data does not currently exist in the United States," says Anthony Stout, manager of EVVE products and services. "EVVE Fact of Death resolves this problem and can save the country and our customers millions, if not billions, of dollars a year."

Before today, the Social Security Administration (SSA) Death Master File (DMF) https://www.ssa.gov/dataexchange/request_dmf.html was the primary source for death record data. However, its usefulness has been severely hampered since November of 2011, when the SSA was no longer allowed to include state protected death records in the DMF. As a result, millions of death records are missing every year from the DMF, making it woefully incomplete and unusable for many organizations requiring this information to help prevent fraud, protect identities, reduce waste, and streamline business processes.

Currently, there are 37 of 57 states and jurisdictions participating in the EVVE FOD service, allowing credentialed users to match against more than 55 million death records. The number of participating jurisdictions is increasing steadily, and all 57 states and jurisdictions across the nation are working to join the EVVE FOD service as soon as possible.

As death record data includes highly sensitive and personal information, confidentiality and security of such data is of the utmost importance. To ensure this service utilizes the highest levels of security possible, NAPHSIS has partnered with LexisNexis® VitalChek Network Inc. (VitalChek) http://vitalcheknetwork.com/ to maintain the EVVE Fact of Death Query Service. VitalChek adheres to all major InfoSec standards such as PCI-DSS, SOC 1, and SOC 2, and uses public key / private key encryption technology to ensure incoming requests and outgoing results are secure.

An organization that has a valid need for death record data, and belongs to one of the following current categories, may be credentialed to use EVVE Fact of Death:

- Federal - Benefits or Admin
- State/Local - Benefits or Admin
- Pension/Retirement
- Insurance
- Receivables
- Financial

Organizations can become credentialed EVVE Fact of Death users by visiting the website at http://www.naphsis.org/evvefod, clicking on the "Get Started Now" link at the bottom of the page and following the prompts. The process is easy, and qualified customers can expect to be using EVVE Fact of Death within a week. There is a minimal per-record price for credentialed private companies and/or government agencies to use the EVVE Fact of Death service.

About NAPHSIS: The National Association for Public Health Statistics and Information Systems (NAPHSIS) is the national nonprofit organization representing the state vital records and public health statistics offices in the United States. Formed in 1933, NAPHSIS brings together more than 250 public health professionals from each state, the five territories, New York City, and the District of Columbia.

Contact: Anthony N. Stout, Manager - EVVE Products and Services, 301-563-6005 / evvefod(at)naphsis(dot)org
 


Audit Questions Health Information Exchange Oversight in VT
Health IT Interoperability
By Kyle Murphy, PhD
October 06, 2016

Link to original article

An audit of health information exchange activities in Vermont has yielded more questions than answers about healthcare interoperability in the state.

Officials at the Department of Vermont Health Access (DVHA) tasked with overseeing the development of a statewide health information exchange have drawn criticism from the state's auditor for their oversight of millions of dollars in grants and contracts. In a report released late last month, State Auditor Douglas R. Hoffer found DVHA to have fallen short in two areas: evaluating the actions taken by Vermont Information Technology Leaders, Inc. (VITL) — the exclusive operator of the statewide HIE network — and measuring the latter's performance over the previous two fiscal years, FY 2015 and 2016.

According to the report, the state department issued $12.3 million during that time, representing close to one-third of total funding ($38 million) paid to VITL since 2005. Oversight of the VITL HIE contracts and grants fell to both DVHA and the Agency of Administration (AOA).

Deficiencies in oversight have raised doubts about the development of a clinical data warehouse to be used for health data analysis and reporting.

"Although the State assented to VITL building the warehouse, it was not explicitly included in any agreement as a deliverable, nor did the State define its functional and performance requirements. Without such requirements, the State is not in a position to know whether the clinical data warehouse is functioning as it intends," the report states.

Upon closer inspection, the building of the clinical data warehouse casts doubt on the state's handling of its agreements with VITL, which, according to the audit, relied on unclear contract language as authorization for the system.

"Even if we accept that this language authorizes the construction of a clinical data warehouse, which we believe is unclear, no evidence was provided to indicate that the State defined the functional and performance requirements of the warehouse," the report reads. "Without such requirements, the State is not in a position to know whether the clinical data warehouse is functioning as it intends."

Uncertainty also extends to the ownership and use of the clinical data warehouse: the lack of explicit language appears to indicate that the state is the licensee of the software used, but its ability to make use of the data is restricted by the healthcare organizations providing the information that comprises the system.

"Accordingly, VITL contends that the agreements do not currently permit VITL to disclose the personal health information in the warehouse to the State and, therefore, the State does not have any rights to access, use, or disclose this data," the report states.

As it turns out, the case of the clinical data warehouse was a microcosm of a much larger issue of poor programmatic and financial oversight of VITL. In five of six agreements, DVHA failed to finalize the agreement with VITL prior to the project start date. These delays had several consequences:

First, having VITL perform work without a signed agreement inhibited the State’s ability to hold VITL accountable to desired standards because they had not been formally documented and agreed upon. Second, the Green Mountain Care Board reported that delays in finalizing VITL’s contracts resulted in uncertainty about what terms would ultimately be agreed to or omitted, what work should be prioritized, and if and how to allocate staff, contractors, and other resources to various projects. Third, because of the four-month delay in signing contract #30205, VITL and the State agreed to eliminate two required deliverables (connecting the Cancer Registry and the Vermont Prescription Monitoring System to the VHIE). VITL also reported that the delays in signing other agreements resulted in a reduction in the number of completed activities (e.g., fewer interfaces were developed) and certain projects being completed later than expected (e.g., the event notification system was delayed four months).

State officials chalked up the delays to difficulties in receiving federal approval.

As for DVHA's measuring of VITL's performance over the previous two fiscal years, the State Auditor concluded that the agreements "contained few performance measures" to assess quality or impact.

"While DVHA’s agreements with VITL did contain quantity measures (how much), there were very few quality measures (how well), and no impact measures (is anyone better off). Further, the state’s current Vermont Health Information Technology Plan (VHITP) does not specify any performance measures for gauging the performance of the VHIE," the report states.

The state's audit reveals that officials have taken steps to address these deficiencies, including requiring more detailed invoices from VITL (which prompted a still-pending investigation into the allowability of some costs) and a decision by DVHA to fund an impact assessment of VITL's work.

Ultimately, the audit concludes that the state is in no position to determine whether the clinical data warehouse is functioning as intended or to measure VITL's performance in developing HIE services that improve care quality and reduce care costs.

"Without quantifiable performance measures, the State’s ability to judge VITL’s efforts and gauge success is significantly inhibited," it closed.

Given the uncertainty surrounding health information exchange activities, the state of healthcare interoperability in Vermont remains problematic.
 


Health dept extends Datacom outsourcing deal for $160m
By Justin Henry June 25, 2020

Link to original article

Two more years.

The federal Department of Health has extended its IT outsourcing deal with Datacom for a further two years amid the ongoing coronavirus pandemic.

The department handed the company the two-year extension last month at a cost of $159.7 million, bringing the infrastructure and support services deal to $506.3 million over seven years.

It means the contract, which covers the provision, maintenance and refresh of all hardware and software, has now more than doubled in cost since Datacom scooped the deal from IBM in 2015.

The deal also covers a range of enterprise data warehouse services that the department had previously sourced from Accenture.

The extension follows two additional amendments last year, which added $92.9 million ($67.7 million and $25.2 million) to the cost of the contract.

The larger of the two amendments related to an increase in the department’s consumption of services over the term of the contract.

A spokesperson told iTnews that the latest amendment would see the term of the contract pushed out until 30 June 2022.

“The original term of the contract was set to expire on 30th June 2020. The contract has been extended for two years,” the spokesperson said.

“The Department has chosen to exercise a contract extension option available under the contract.”

However, unlike the two amendments last year, the spokesperson said “no new services have been added as part of the extension”.

When Datacom became the incumbent provider five years ago, it helped shift the department to a contemporary outcomes-based model with consumption-based pricing to reduce annual IT costs.

The transition, which took six months, involved establishing a support capability for the department’s enterprise data warehouse, data centres and 490 servers, according to Datacom.

It followed 15 years with a traditional IT services outsourcing model from IBM – a deal that was renewed six times, including one in which ministerial approval was granted to keep it going.

The department currently has an average staffing level (ASL) of 3800.
 


Data brokers and the implications of data sharing - the good, bad and ugly
By Neil Raden July 19, 2019

Link to original article

Summary: The term "data sharing" is expanding, but in a problematic way that raises flags for companies and consumers alike. Neil Raden provides a deeper context for data sharing trends, dividing them into the good, bad and ugly.

The term "data sharing" has, until recently, referred to scientific and academic institutions sharing data from scholarly research.

The brokering or selling of information is an established industry and doesn't fit this definition of "sharing," yet the term is increasingly applied to it. Scholarly data sharing is mostly free of controversy, but all other forms of so-called sharing present some concerns.

Information Resources (IRI), Nielsen and Catalina Marketing have been in the business of collecting data and selling data and applications for decades, but the explosion of computing power, giant network pipelines, cloud storage and, lately, AI is fertile ground for the creation of literally thousands of data brokers, mostly unregulated and presently a challenge to privacy and fairness:

Currently, data brokers are required by federal law to maintain the privacy of a person's data if it is used for credit, employment, insurance or housing. Unfortunately, this is clearly not scrupulously enforced, and beyond those four categories, there are no regulations (in the US). And while medical privacy laws prohibit doctors from sharing patient information, medical information that data brokers get elsewhere, such as from the purchase of over-the-counter drugs and other health care items, is fair game.

Selling Healthcare Data:

One might assume that your medical records are private and only used for the purposes of your healthcare, but as Adam Tanner writes in How Data Brokers Make Money Off Your Medical Records:

IMS and other data brokers are not restricted by medical privacy rules in the U.S., because their records are designed to be anonymous-containing only year of birth, gender, partial zip code and doctor's name. The Health Insurance Portability and Accountability Act (HIPAA) of 1996, for instance, governs only the transfer of medical information that is tied directly to an individual's identity.

It is a simple process for skilled data miners to combine anonymized and non-anonymized data sources to re-identify people from what are supposed to be protected medical records.
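To make the mechanics of such re-identification concrete, here is a minimal sketch of a linkage attack using entirely made-up data. The column names and records are hypothetical; the point is that a handful of quasi-identifiers (birth year, gender, partial zip code) is often enough to join an "anonymous" medical file to a named public record.

    # Illustrative linkage attack on synthetic data (no real records involved).
    import pandas as pd

    # "Anonymized" medical records: no names, only quasi-identifiers.
    claims = pd.DataFrame({
        "birth_year": [1956, 1970, 1982],
        "gender":     ["F",  "M",  "F"],
        "zip3":       ["787", "752", "770"],
        "diagnosis":  ["diabetes", "hypertension", "asthma"],
    })

    # A public, identified dataset (e.g., a voter-roll-style extract).
    public = pd.DataFrame({
        "name":       ["A. Smith", "B. Jones", "C. Garcia", "D. Lee", "E. Chen"],
        "birth_year": [1956, 1970, 1982, 1970, 1982],
        "gender":     ["F",  "M",  "F",  "F",  "F"],
        "zip3":       ["787", "752", "770", "752", "770"],
    })

    # Join on the quasi-identifiers; records that match exactly one person
    # in the public file are re-identified.
    linked = claims.merge(public, on=["birth_year", "gender", "zip3"])
    linked["n_matches"] = linked.groupby(["birth_year", "gender", "zip3"])["name"].transform("count")
    unique = linked[linked["n_matches"] == 1]
    print(unique[["name", "diagnosis"]])

In practice, the public side of the join can be a voter file, a property record, or a purchased marketing list, which is why "year of birth, gender, partial zip code" is far less anonymous than it sounds.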

One small step toward reestablishing trust in the confidentiality of medical information is to give individuals the chance to forbid collection of their information for commercial use-an option the Framingham study now offers its participants, as does the state of Rhode Island in its sharing of anonymized insurance claims. "I personally believe that at the end of the day, individuals own their data," says Pfizer's Berger [Marc Berger oversees the analysis of anonymized patient data at Pfizer]. "If somebody is using [their] data, they should know." And if the collection is "only for commercial purposes, I think patients should have the ability to opt out."

There are also legitimate data markets that gather and curate data responsibly. Most notable lately is Snowflake, which I'll cover below. Others are Datamarket.com, which is now part of QLIK, Azure Data Marketplace (Microsoft) and InfoChimps.com.

One I can't get my arms around is Acxiom. They are a $1B business that collects all sorts of information about people in 144 million households. Apparently their business is creating profiles so advertisers can target you more accurately. That seems innocent enough, but I don't know if that's the whole story. However, about five years ago, Acxiom launched https://aboutthedata.com/portal, which allows you to see what data they have about you.

Even more remarkable, you can correct mistakes and you can opt out. According to Acxiom, though, if you do opt out, you can expect to get a lot of ads you're not interested in. Keep in mind, though, that this business is still unregulated, so it would take an investigative reporter to validate these claims.

Then there is this: Acxiom, a huge ad data broker, comes out in favor of Apple CEO Tim Cook's quest to bring GDPR-like regulation to the United States:

In the statement, Acxiom said that it is "actively participating in discussions with US lawmakers" on consumer transparency, which it claims to have been voluntarily providing "for years." Still, the company denied that it partakes in the unchecked "shadow economy" which Cook made reference to in his op-ed.

The good - let's start with data.gov

From Wikipedia: Data.gov is a U.S. government website launched in late May 2009 by the then Federal Chief Information Officer (CIO) of the United States, Vivek Kundra. Data.gov aims to improve public access to high value, machine readable datasets generated by the Executive Branch of the Federal Government. The site is a repository for federal, state, local, and tribal government information, made available to the public. Data.gov has grown from 47 datasets at launch to over 180,000 (actually now over 250,000).

A chart in the original article gives a sense of the vastness and variety of free, open, and curated data on data.gov.
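For a sense of what "machine readable" access looks like in practice, the sketch below searches the data.gov catalog from Python. It assumes the catalog exposes the standard CKAN search endpoint at catalog.data.gov, which is how the site is generally documented to work; treat the URL and response shape as assumptions rather than a guaranteed contract.

    # Minimal sketch: search the data.gov catalog for health-related datasets.
    # Assumes the CKAN "package_search" API at catalog.data.gov; verify before relying on it.
    import requests

    CATALOG_URL = "https://catalog.data.gov/api/3/action/package_search"  # assumed endpoint

    resp = requests.get(CATALOG_URL, params={"q": "hospital readmissions", "rows": 5}, timeout=30)
    resp.raise_for_status()
    result = resp.json()["result"]   # CKAN wraps search output under "result"

    print(f"~{result['count']} matching datasets")
    for dataset in result["results"]:
        print("-", dataset["title"])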

Don't confuse this with: The Open Data Initiative

The Open Data Initiative (ODI) is a joint effort to securely combine data from Adobe, Microsoft, SAP, and other third-party systems in a customer's data lake. It is based on three guiding principles:
- Every organization owns and maintains complete, direct control of all their data
- Customers can use AI to get insights from unified behavioral and operational data
- Partners can easily leverage an open and extensible data model to extend solutions

ODI is an ambitious effort with admirable goals, but it is not the subject of this article.


The bad

Epsilon (recently acquired in April 2019 for $4.4B) refused to give a congressional committee all the information it requested, saying: "We also have to protect our business, and cannot release proprietary competitive information." Among the data at issue was information on people who are believed to have medical conditions such as anxiety, depression, diabetes, high blood pressure, insomnia, and osteoporosis.

Sprint, T-Mobile, and AT&T said they were taking steps to crack down on the "misuse" of customer location data after an investigation this week found how easy it was for third parties to track the locations of customers. (Misuse? They SOLD the data).

Experian sold Social Security numbers to an identity theft service posing as a private investigator.

The ugly

Optum. The company, owned by the massive UnitedHealth Group, has collected the medical diagnoses, tests, prescriptions, costs and socioeconomic data of 150 million Americans going back to 1993, according to its marketing materials. Since most of this is covered by HIPAA, they are very clever in getting around the regulations. But that socioeconomic data is a real red flag.

What it means, at the very minimum, is the use of the "social determinants": income and social status, employment, childhood experiences, gender, genetic endowment. That's just the start. You have to ask yourself, why would anyone want to use this information? Life insurance, car insurance, mortgages, education, adoption, personal liability insurance, health insurance, renting, employment…there is no end to it, and you will never know what's in there.

The World Privacy Forum found a list of rape victims for sale. The group also found brokers selling lists of AIDS patients, the home addresses of police officers, a mailing list for domestic violence shelters (which are typically kept secret by law) and a list of people with addictive behaviors toward drugs and alcohol.

Tactical Tech and artist Joana Moll purchased 1 million online dating profiles for 136€ from USDate, a supposedly US-based company that trades in dating profiles from all over the globe.

Snowflake's Data Sharing

Snowflake is a cloud-native data warehouse offering. Their secret sauce is the separation of data from logic. Take Amazon as an example (Snowflake also runs, or shortly will run, on Google Cloud and Microsoft Azure): your data resides in S3, where storage costs are asymptotically approaching zero, and you basically pay only for processing on EC2. Everything works as a "virtual data warehouse," meaning you create abstractions over the data and nothing moves or is copied. You can have virtually thousands of data warehouses with one copy of the data.

I don't know this for sure, but I suspect Snowflake, despite their success, saw the need to create some other technology, as data warehouses are a limited market. What they came up with was using their existing technology to provide a mechanism for data providers to locate their data in a Snowflake region and allow others to "rent" it without copying or downloading it. Besides the obvious productivity and cost savings, Snowflake added features to their data sharing product, including some level of curation and verification of the data. I get the impression this is still a work in progress.

And, because all access to data is through (virtual) data warehouse views, integration of data sources, reference data and a level of semantic coherence - all qualities of a data warehouse - are there. In contrast to a bucket of bits you can download and wrangle later, this seems like a good idea to me.
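For readers unfamiliar with how this looks in practice, here is a minimal sketch of publishing a dataset through Snowflake data sharing from Python. The account, database, and table names are placeholders, and the SQL follows Snowflake's documented CREATE SHARE / GRANT ... TO SHARE pattern as I understand it; treat it as an illustration rather than a production recipe.

    # Sketch: expose a table to another Snowflake account via a share.
    # Names and credentials are placeholders; commands follow Snowflake's
    # secure data sharing SQL (CREATE SHARE, GRANT ... TO SHARE, ALTER SHARE).
    import snowflake.connector  # pip install snowflake-connector-python

    conn = snowflake.connector.connect(
        user="PROVIDER_USER",        # placeholder credentials
        password="********",
        account="provider_account",  # placeholder account identifier
    )
    cur = conn.cursor()

    # Create the share and grant read access to one table in it.
    cur.execute("CREATE SHARE IF NOT EXISTS claims_share")
    cur.execute("GRANT USAGE ON DATABASE health_db TO SHARE claims_share")
    cur.execute("GRANT USAGE ON SCHEMA health_db.curated TO SHARE claims_share")
    cur.execute("GRANT SELECT ON TABLE health_db.curated.deidentified_claims TO SHARE claims_share")

    # Make the share visible to a consumer account; the consumer then creates
    # a database from the share and queries it in place -- no copy is made.
    cur.execute("ALTER SHARE claims_share ADD ACCOUNTS = consumer_account")

    cur.close()
    conn.close()

On the consumer side the step is roughly CREATE DATABASE shared_claims FROM SHARE provider_account.claims_share, which is what "renting" the data without moving it amounts to.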

I asked Justin Langseth, Snowflake's CTO, if he was concerned about criminal, civil or even ethical exposure to Snowflake from the data provided. His e-response was:

Legally no we're just the communication platform, the provider of the data is responsible for their data... but we are looking at some tools that can detect hidden bias in models and data though, so it is an area of interest. Should this be enough of a reason to not have people share data? There's tons of social good that can come from this as well.

The problem with his response is twofold: First, not just the data but any calculations and modeling a customer does take place within Snowflake. Second, legal responsibility is an abstract term: you may not be legally responsible, but you may still be charged or sued and have to defend yourself, with an uncertain outcome.

Beyond all of these issues, I'm wondering how many companies have data someone else would want to buy. If you dig into data lakes, the volume comes from things like log files, which would be useless without context, and imported data, which may not be resellable anyway. Between data.gov and Google and Facebook et al., is there really a market for this? I'm also thinking about edge data; how would you package that, given the trend is not to bring it back to the cloud (though I still don't understand how you do machine learning at the edge)?

Langseth also recently posted an article on Medium covering the "hardest" issues data marketplaces will face:
1. Faked and Doctored Data
2. Sales of Stolen, Confidential, and Insider Data
3. Piracy by Buyers of Data
4. Big Data can be really Big
5. Data is Fast
6. Data Quality can be Questionable
7. Lack of Metadata Standards

And in conclusion, he asks: So how do IotA, SingularityNET, and Datum address these issues?

Mostly they don't, at least so far. Most of the projects working on decentralized data marketplaces have simply not hit these issues yet as they are just in a test mode on a test network. To the extent they have thought about the trust-oriented issues, most of them propose either a reputation system or a centralized validation authority. Reputation systems for data marketplace are highly prone to Sybil attacks (large #'s of fake accounts colluding), and if you need a centralized authority forever you're defeating the purpose of a decentralized crypto system and may as well do everything the old way.

My take

The battle for privacy is already lost. Once data is out, it's gone. Stemming the flow of current data could eventually dilute the value of the data brokers, but that requires regulation, which is unlikely in the USA. Reining in data brokers, who exist in the shadows rather than in plain sight like a polluting coal-fired power plant, will require digital enforcement and for-good trolls sniffing out the bad guys. The only question is, who will pay for the development and operation?
 


Doomed From the Start? Why a Majority of Business and IT Teams Anticipate Their Software Development Projects Will Fail
Cision PR Newswire
Mar 14, 2011, 09:00 ET

Link to original article

Up to 75% of Business and IT Executives Anticipate Their Software Projects Will Fail

Geneca study reveals fuzzy business objectives, out-of-sync stakeholders, and excessive rework undermine confidence in project success

OAKBROOK TERRACE, Ill., March 14, 2011 /PRNewswire/ -- Many executives are feeling worn down by confusion around project business objectives and recognize the need for more involvement from business stakeholders. These are the key findings of a new study of approximately 600 business and IT executives published by software development firm Geneca.

The study, entitled "Doomed From the Start? Why a Majority of Business and IT Teams Anticipate Their Software Development Projects Will Fail" examines why teams continue to struggle to meet the business expectations for their projects. It surveys participants on such topics as requirements definition, accountability, and measuring project success.

"There is no question that the overall survey results shows that our single biggest performance improvement opportunity is to have a more business-centric approach to requirements," states Geneca President & CEO, Joel Basgall. "Unfortunately, poor requirements definition practices have become so common that they're almost tolerated. The gloomy results of this survey really drive this home."

Interestingly, survey responses from IT professionals and their business counterparts are fairly similar, indicating that both groups have many of the same concerns with regard to their projects.

Key survey findings include:
- Lack of confidence in project success: 75% of respondents admit that their projects are either always or usually "doomed right from the start."
- Rework wariness: 80% admit they spend at least half their time on rework.
- Business involvement is inconsistent or results in confusion: 78% feel the business is usually or always out of sync with project requirements and business stakeholders need to be more involved and engaged in the requirements process.
- Fuzzy business objectives: Only 55% feel that the business objectives of their projects are clear to them.
- Requirements definition processes do not reflect business need: Less than 20% describe the requirements process as the articulation of business need.
- Lack of complete agreement when projects are done: Only 23% state they are always in agreement when a project is truly done.

"Although most software projects begin with high expectations, this research reminds us that problems usually lurk below the surface right from the start," states Basgall. "The key is to understand what we are seeing and what to do about it."

The survey consisted of 25 closed ended questions and was completed by 596 individuals closely involved in the software development process. This complete study is available online at http://www.genecaresearchreports.com.

About Geneca

Geneca is a custom software development firm known for predictably delivering great business outcomes. Well known for its use of Getting Predictable(SM), its groundbreaking software requirements practice, Geneca is committed to setting its employees and clients up for success. Learn more about Geneca at www.Geneca.com. Visit Geneca's blog at http://www.gettingpredictable.com.

Copyright © 2022 Cision US Inc.
 


DXC books $81M CMS data warehouse support order
By Ross Wilkers | Dec 20, 2017

Link to original article

DXC Technology has won a one-year, $81.6 million task order with the Centers for Medicare and Medicaid Services for enterprise IT services to help operate the main portion of CMS’ data warehouse.

CMS received five offers for the order it awarded via the National Institutes of Health’s $20 billion CIO-SP3 contract vehicle, according to Deltek data.

The company will be responsible for the integrated data repository’s information systems architecture and data models. DXC said in a release it will also carry out extract, transform and load (ETL), user support, data quality and support functions.

Within the data repository is a Hadoop and Teradata enterprise data warehouse that handles data related to CMS’ program benefits.

Task order work will aim to ensure that the data and data services provided by the repository for Part A and Part B claims are "payment grade," CMS said in an October 2016 sources sought notice.

CMS defines payment grade as automated validation that the data loaded into the repository exactly matches what was received from the sending source, along with the installation and enforcement of internal controls to maintain separation of duties.

The agency also determines payment grade based on automated reporting and reconciliation processes in place to confirm the data that was loaded. In most cases CMS expects the reporting to be automated but all reconciliation is manual.
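As a rough illustration of the kind of automated load validation and reconciliation reporting the notice describes, the sketch below compares a source extract against the loaded table on row counts and column totals. The file names, columns, and checks are hypothetical; a real payment-grade pipeline would be far more extensive.

    # Sketch: reconcile a loaded claims table against its source extract.
    # File names and columns are made up for illustration.
    import pandas as pd

    source = pd.read_csv("source_extract.csv")   # data as received from the sender
    loaded = pd.read_csv("warehouse_load.csv")   # data as loaded into the repository

    checks = {
        "row_count":          (len(source), len(loaded)),
        "total_paid_amount":  (round(source["paid_amount"].sum(), 2),
                               round(loaded["paid_amount"].sum(), 2)),
        "distinct_claim_ids": (source["claim_id"].nunique(),
                               loaded["claim_id"].nunique()),
    }

    report = []
    for name, (expected, actual) in checks.items():
        status = "OK" if expected == actual else "MISMATCH"
        report.append(f"{name}: source={expected} loaded={actual} [{status}]")

    print("\n".join(report))

    # In a payment-grade setting, any MISMATCH would block downstream use
    # and trigger the manual reconciliation process CMS describes.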

DXC is in the process of separating and merging its U.S. government business into Vencore and KeyPoint Government Solutions to create a new, publicly-traded company.
 


2 officials sacked for not reporting loss of birth records that could have exposed 1,500 Texans to ID theft
By Robert T. Garrett - Aug 11, 2016 CDT

Link to original article

AUSTIN — The state registrar of vital statistics and her top deputy have been fired for not disclosing several years ago that their office lost a book containing records of 500 Texas births in early 1993.

The missing book — one of about 800 volumes of birth records that state workers assemble each year from data passed on by local registrars — contained sensitive personal information on about 1,500 infants and parents, including the parents' Social Security numbers, Department of State Health Services spokeswoman Carrie Williams said Wednesday.

As far as officials know, though, no one has been harmed or had their identity stolen as a result of the book's disappearance, she said.

"We have no indication of that," Williams said. "We are sending certified letters to the 1,500 people, informing them of the situation and letting them know that we are offering credit monitoring."

The letters will begin going out next week, she said.

State registrar Geraldine R. Harris could not be reached for comment. She wrote on her July 29 dismissal notice: "I disagree and will ... appeal."

Last week, deputy state registrar Lonzo Kerr Jr. was fired after department chief operating officer Ed House rejected Kerr's contention that security is lax around a vault in Austin that contains 120,000 bound volumes of birth and death records.

In a four-page letter, Kerr said he launched an effort to inventory books and images of records that have not been indexed. But he said he got little support from department leaders, who he said have ignored problems "for decades."

House, though, said Kerr was the vital statistics unit manager in charge of security and couldn't "lay responsibility for security failings on others."

Department spokeswoman Williams said the vault is secure, guarded by surveillance cameras and with special access cards required for unit employees.

"We're always scrutinizing ... whether improvements are needed," she said.

However, Harris and her subordinates noticed the book was missing in late 2012 or soon thereafter, Williams said. They failed to inform House, then-commissioner David Lakey, Texas health and human services inspector general Stuart Bowen or the federal Social Security Administration, which Texas has an obligation to notify of potential breaches, she said.

"This should have been reported immediately so we could evaluate it for investigation, but that didn't happen," Williams said.

Speaking of Harris and Kerr, she said: "They were accountable for these records and the security of these records."

The episode recalled a much bigger state government flub in handling sensitive personal information. In April 2011, then-Comptroller Susan Combs revealed that names, addresses and Social Security numbers belonging to 3.5 million Texans were inadvertently placed on a publicly accessible file server in her office and remained there for about a year.

Combs offered individual credit monitoring to the potentially affected people, who were members of two large state pension systems and Texas Workforce Commission beneficiaries. The state spent about $600,000 for the credit monitoring, though there was no evidence of misuse of data or identity theft, The Associated Press reported a year later.

Combs fired an unspecified number of employees associated with the lapse, the AP said.

At the state health department, higher-ups didn't learn of the missing volume until early June, according to an investigative report by Bowen and personnel records that The Dallas Morning News obtained under Texas' open-records law. Williams said department officials immediately notified Bowen, who launched his independent review.

The book, "Volume 45," contained birth records transmitted to Austin in January and February 1993, she said. At the time, there was a 15- to 20-day delay between births and the vital statistics unit's receipt of local birth data, she said. Affected children who survived are now all adults.

A unit employee, program manager Anna "Chris" Guerrero, disclosed the book's disappearance amid a personnel dispute with Harris, the state registrar, according to an Aug. 2 memo by House, the department's chief operating officer.

The vital statistics unit reports to him.

"The fact that you kept documentation pertaining to this incident in your desk drawer for three years indicates that you knew this was a serious matter, and suggests that you were keeping it in reserve to use when it might best suit your needs," House wrote Guerrero.

"Given ... your long tenure in state employment, you knew or should have known that you needed to report the missing book to someone else besides the very supervisor about whom you frequently complained," he said.

House suspended Guerrero without pay for three days.

Books going missing is rare but has happened, department spokeswoman Williams said. In addition to the 1993 births book, a book of deaths recorded in 1949 and another containing 1947 birth records have been missing since the 1970s, she said.

Department commissioner John Hellerstedt has installed Victor Farinelli as acting state registrar.
 


Delaware joins eight-state health care data sharing initiative
By Nick Ciolino - Jun 25, 2018

Link to original article

Delaware is now part of an initiative to share best practices for collecting and using health care data.

The National Governors Association created the project which also includes Arkansas, Colorado, Indiana, Iowa, Minnesota, Vermont and Washington. It seeks to determine the best use for data analytics to inform Medicaid and other state health spending policy.

Dr. Elizabeth Brown is Medical Director of the Delaware Division of Medicaid and Medical Assistance. She says state health officials have set up a data warehouse meant to inform Delaware’s decisions as the state moves from a volume-based to a value-based health system and sets a healthcare spending benchmark. She adds this is an opportunity to share what the First State has learned and get input from other states.

“We’re going to take a step back, look at all of our data systems, look at what best practices are across the country and make sure we are aligning with those best practices,” said Brown.

As a state where the cost of health care is growing faster than its economy, data plays a large role in Delaware’s health spending policy.

But Brown says it’s important to realize the strengths and weaknesses of the data that’s available.

“And that’s actually one of the reasons that projects like this are so important,” she said. “We are analyzing what we can get out of data, what the questions that can be answered accurately and completely with our data are, and where we need to be thinking outside of just the claims data.”

With the support of the NGA, the state health systems will be sharing data techniques with one another over the next 16 months, but will not share the data itself to protect patient privacy.

About 230,000 Delawareans receive Medicaid.
 


Texas HHSC privacy breach may affect 1.8k individuals
Written by Julie Spitzer | June 20, 2017

Link to original article

The Texas Health and Human Services Commission notified clients after discovering a box containing protected health information outside an unsecured dumpster belonging to a commission eligibility office.

The forms in the box — which included client information of 1,842 people in the Houston area — may have contained information such as names; client numbers; dates of birth; case numbers; phone numbers; mailing addresses; Social Security numbers; health information; and bank account numbers. HHSC is offering those affected by the breach one year of free credit monitoring services, although the agency currently has no evidence that anyone viewed the information, Texas HHSC Assistant Press Officer Kelli Weldon confirmed to Becker's Hospital Review via e-mail.

Ms. Weldon said HHSC is reviewing its processes and procedures for disposing documents that contain private information to prevent this type of incident from occurring in the future.
 


Behind Georgia’s Covid-19 dashboard disaster
The Georgia Department of Public Health saw its reputation scorched as a result of the state’s ridiculed Covid-19 dashboard. But as it turns out, the health department had little control over the troubled site.
BY KEREN LANDMAN - OCTOBER 24, 2020
Research for this story was supported by the Fund for Investigative Journalism.

Link to original article

On Tuesday, April 28, eight days after Brian Kemp sent shock waves nationwide as the first governor to announce he would reopen his state during the pandemic, a quiet storm was brewing over another of Kemp’s decisions. State officials were sending flurries of emails about the previous day’s launch of Georgia’s new Covid-19–tracking dashboard—the primary tool that business owners would use to decide when or whether to reopen, now that they could. The launch was supposed to mark an improvement over the state’s preexisting Covid-19 webpage. But it was not going well.

Nancy Nydam, director of communications for the Georgia Department of Public Health, forwarded to two of her colleagues an email she’d received listing constituents’ complaints about the dashboard: deaths by county and demographic had disappeared; age and gender information had vanished; the color scheme was difficult to see for some readers; numbers on the page contradicted each other. At least one state agency reached out with an urgent need for data that were no longer on the page—an office manager from the Georgia Emergency Management and Homeland Security Agency (GEMA) wanted answers from the health department “like ASAP” to a list of questions about missing demographic information regarding hospitalizations and deaths, as well as some other metrics. “I wanted to see if you guys have the information listed below in an easy to share format?” she wrote.

That day’s hitches were not the first indication of the dashboard’s potential problems; as recently as the weekend before its launch, the state’s lead epidemiologists noted that Dougherty County, where the virus’s scorching arc through low-income Black communities had rendered Albany the city with the second-highest number of Covid-19 cases per capita in America, was absent from the as-yet-unpublished dashboard’s list of “top five” counties.

Nor would the dashboard operate smoothly in the weeks and months to come. That much would become clear both to state officials firing off frantic emails and to bewildered Georgians trying to interpret the dashboard’s data in an attempt to decide whether to visit a restaurant, attend religious services, or send their children to summer camp or daycare.

What remained unclear to the public, however, was who exactly was pulling the strings behind the state’s maligned Covid-19 dashboard. Although by all accounts it would appear that it was operated by the Georgia Department of Public Health, some skeptics felt that the fingerprints of the state’s public-health experts were conspicuously absent from the dashboard bearing the agency’s name.

In May, health department commissioner Dr. Kathleen Toomey abruptly ended an interview with a WABE reporter when he raised a question to that effect.

“Who is making the call about what information the Department of Public Health is displaying on [its data dashboard] page?” reporter Sam Whitehead asked. “Is that being made within your agency?”

“Listen, I’m gonna have to run,” Dr. Toomey responded, in what came across as an almost comical attempt to avoid the question. “I actually can’t answer this right now because I’m getting called by the Governor’s office.”

The answer to Whitehead’s question proved more elusive than it should have. The Atlanta Journal-Constitution reported in July that the health department had not fulfilled any of the dozens of open records requests seeking emails relating to the state’s handling of the Covid-19 pandemic since March. In August, the AJC reported that GEMA had redacted enormous amounts of information from Covid-19–related records requests it had fulfilled—and presented the newspaper with a bill for nearly $33,000 to fulfill additional requests.

Atlanta was able to obtain emails illuminating the inner workings of the state’s Covid-19 dashboard not from the state’s Department of Public Health but from the Governor’s Office of Planning and Budget. Why would the office that handles Kemp’s and the state’s budgetary affairs have been the custodian of emails about what ostensibly belongs in the state health department’s domain? Because that office had outsourced the dashboard to a private company—and had assumed what public-health experts describe as an unusually expansive role in overseeing the project.

A series of open records requests Atlanta filed to the Governor’s Office of Planning and Budget yielded thousands of emails concerning the state’s new Covid-19 dashboard, sent between employees of that office and those of the health department—as well as those of the third-party vendor tasked by that office with creating the dashboard. An examination of those emails revealed the health department had limited input into and no real oversight over the dashboard during its creation and in the months after its launch. Additionally, the sidelining of the health department allowed for errors in the analysis, interpretation, and visualization of the state’s Covid-19 data, while simultaneously costing the state tens of thousands of dollars—and time it did not have to spare.

Other open records requests for emails to and from a different state agency showed that at the same time the Covid-19 dashboard was suffering from very public problems, health department officials were working in collaboration with that agency to create a different dashboard—and that after its launch, they were unsuccessful in their attempts to make its existence widely known.

Furthermore, when the dashboard elicited public outrage, the health department shouldered the blame for errors over which it had no control, damaging the relationship between the agency and the community it serves.

“This is the type of information that you make informed decisions on—decisions that impact millions of people in a jurisdiction,” says Dr. Syra Madad, an infectious-disease epidemiologist and special pathogens preparedness expert in the New York City hospital system, in reference to state-run Covid dashboards. Because the impact of dashboards on those decisions is so outsized, authorities must take great care in determining who oversees them, according to Dr. Madad. “It’s okay to bring in outside individuals or contract with other entities as long as it’s in collaboration,” she says. “But if this [outsourcing of the dashboard] was based on a political decision and not in collaboration with public-health people that actually know what they’re doing, then that’s a recipe for disaster.”

In her April 28 email, Nydam, the health department’s communications director, particularly had been concerned about an inquiry from the AJC in relation to the one-day-old dashboard: “The most pressing is this email from the AJC,” she had written to two health department employees. “Someone must talk to them or we are going to get dragged through the dirt for something that we did not do.”

In response to Atlanta’s detailed questions about the contents of the emails—including why the health department didn’t have more control over the dashboard on its own site and whether its epidemiologists were given enough input into the dashboard—the governor’s press secretary, Cody Hall, only responded: “We are referring comment to DPH here.” When Atlanta pointed out that the questions concerned the actions and decisions of the Governor’s Office of Planning and Budget, Hall would only state: “As the media contact for the Governor’s Office my comment is: ‘I am referring this media request to the Department of Public Health.’”

Similarly detailed questions to the health department were met with this statement from Nydam: “Throughout the COVID-19 pandemic, the Georgia Department of Public Health has worked and continues to work closely with Governor Kemp’s office, the Georgia Department of Community Health and the Georgia Emergency Management and Homeland Security Agency to provide data that is accurate and transparent. We continually review and update features of the dashboard with our vendor . . . to ensure we are providing as complete a picture as possible of COVID-19 in Georgia.”

Several experts on American public-health infrastructure told Atlanta it’s not uncommon for health departments to have a contractual arrangement with a third party to help with certain aspects of data management or with special, time-limited projects like surveys. But it’s unusual to completely outsource a public-health data analysis that shows up on a health department’s site while failing to give the health department oversight of that analysis, says Janet Hamilton, executive director of the Council of State and Territorial Epidemiologists, a nonprofit organization representing public-health epidemiologists. She points out that a state’s team of epidemiologists is uniquely equipped to interpret, analyze, and visualize public-health data.

“That is the job of an epidemiologist, to not just produce a report—a biostatistician can do that—but [to carry out] the ‘ground truthing’ of it,” says Hamilton. That is, tethering the data to real events rather than the projections of policy experts. “It’s just so critical that you do have the right epidemiologists that are leading the efforts and able to see inside the work.”

In Georgia, those epidemiologists existed; they were employed by the Department of Public Health. But they were not leading the efforts.

On Monday, March 16, the novel coronavirus had begun to wreak havoc on Georgians’ lives. The night before, Atlanta mayor Keisha Lance Bottoms had declared a state of emergency, and it was the first day of remote learning for students in many school districts statewide. The Department of Public Health’s daily Covid-19 status report—at that time, a bare-bones page consisting of no more than a case density map of the state, a list of cases by county, and a couple of pie charts—counted 99 cases and one death due to the virus.

That morning, Chavis Paulk, the division director of analytics in Governor Kemp’s Office of Planning and Budget, sent an email introducing himself and his team to Theresa Do, a Washington, D.C.–based epidemiologist and manager at SAS, a data-analysis software and consulting company headquartered in North Carolina. The email mentioned an Excel file containing the details of each suspected Covid-19 infection in the state, which Paulk’s team had just uploaded to a secure server.

It was an innocuous enough introduction, but it opened the door to a protracted and consequential barrage of emails between SAS, the governor’s planning and budget office, and, eventually, the health department.

SAS has been around since the 1960s, when it was known as the Statistical Analysis System, a computer program for analyzing agricultural data. Later incorporated in Raleigh, the company has since evolved into a multinational software and data analysis consulting corporation with more than 14,000 employees. Its software is widely used in health-services research and in public health, including at the Centers for Disease Control and Prevention (CDC); the Morbidity and Mortality Weekly Report—the agency’s flagship publication—often notes use of the company’s software.

The relationship between the governor’s office and SAS was relatively new. In an annually renewable contract initially signed in August 2019, the company agreed to provide software and consulting services to the Governor’s Office of Planning and Budget at a total cost of nearly $3.7 million over five years. But OPB’s director since early 2019, Kelly Farr, who also had worked for Kemp back when the governor was the secretary of state, already knew SAS well: From 2017 to 2019, Farr had worked for the company as an account executive.

The data in the Excel file that the governor’s planning and budget office sent to SAS on March 16 were similar to the data the health department was using to make its own Covid-19 webpage, then only four days old. Over the next six weeks, as the health department continued to maintain its Covid page, the team at SAS would develop an entirely different one using its own software and analysts.

Well before the launch of the SAS dashboard, the Covid-19 webpage managed by the health department had its own problems. As the SAS team worked on its prototype—and as novel coronavirus infections surged in Georgia—the health department scrambled to keep its webpage updated with the flood of information coming its way. Its efforts were complicated by the massive influx of inaccurate and incomplete data pouring in via antiquated reporting processes managed by a decentralized and underfunded public-health system. The effects of these problems only would be amplified once the state’s public-health authorities no longer had control of how Covid-19 data was presented on its own website.

The pressures of a public-health emergency can create intense demand for frequent, real-time reporting that may exceed a health department’s capacity, according to Hamilton, with the Council of State and Territorial Epidemiologists. But when outside data analysts responsible for quality control don’t see a dataset through a public-health lens, the high-pressure environment can lead to errors, she says. “I don’t necessarily want to say that [any errors are] malicious—I think that they’re being driven in part by unrealistic expectations that data is coming in in a way that is much cleaner” than it is, she says.

On April 11, Farr, director of the governor’s planning and budget office, sent an email to Lorri Smith, Governor Kemp’s chief operating officer, and Dr. Toomey, the health-department commissioner, with two links to the SAS team’s work in progress: one with “high level information that could be incorporated as [a] website” and another with “additional information and insights.”

Four days later, health-department epidemiologist Laura Edison responded to an email from Anand Balasubramanian, the governor’s technology advisor, in which he’d asked about “some concerns” she had with the dashboard prototype. “I think this is a great display,” she wrote back, “and just have some nuances to discuss.” In a conference call summarized in a subsequent email, Edison and her colleagues noted that in some places, the dashboard used inappropriate terminology and lacked sufficient explanatory text; in others, key metrics and tables were absent, or existed where they didn’t belong; the graph showing the daily case count did not use shading to indicate a 14-day “pending period” to account for the lag time between a person’s onset of symptoms and the confirmation of their positive test result by the state. SAS epidemiologist Do summarized health-department staffers’ recommendations in a table spanning three pages. (SAS would make nearly all the changes, she wrote.)
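The “pending period” point is a small but telling detail: the most recent days always look artificially low because test results are still trickling in, so epidemiologists shade them to keep readers from seeing a false downward trend. Here is a minimal sketch of that convention using synthetic numbers; it is not SAS's or the health department's code, just an illustration of the technique the epidemiologists asked for.

    # Sketch: shade the last 14 days of a daily case-count chart as "preliminary".
    # The case counts are synthetic; only the shading convention is the point.
    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    dates = pd.date_range("2020-03-16", periods=60, freq="D")
    cases = rng.poisson(lam=np.linspace(50, 400, len(dates)))
    cases[-14:] = (cases[-14:] * np.linspace(0.9, 0.3, 14)).astype(int)  # mimic reporting lag

    fig, ax = plt.subplots(figsize=(9, 4))
    ax.bar(dates, cases, width=0.8, color="steelblue")

    # Shade the 14-day pending period so the apparent drop isn't read as a real decline.
    ax.axvspan(dates[-14], dates[-1], color="gray", alpha=0.3,
               label="Preliminary: reports still arriving (14-day pending period)")
    ax.set_ylabel("Reported cases by date")
    ax.legend(loc="upper left")
    fig.autofmt_xdate()
    plt.show()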

But Karl Soetebier, the director of the health department’s informatics office, later would make plain just how little input he’d had into the SAS dashboard.

“My only real involvement to date has been to provide the data to OPB [the governor’s planning and budget office] and a few discussions with the folks from SAS about the data itself,” he wrote to Balasubramanian. “I have no access to the site and no real awareness of who is responsible for the details behind this or what process is needed to have changes made.”

When Kemp announced on Monday, April 20, that he’d soon allow nail salons, hair salons, and bowling alleys—followed by restaurants and movie theaters—to resume serving customers, Georgia did not yet meet the criteria to reopen as set forth in White House guidelines (namely, a downward trajectory of documented cases within a 14-day period). President Trump himself criticized Kemp for reopening the state prematurely. The following weekend, the day after the first businesses reopened their doors, SAS’s Georgia team lead Albert Blackmon wrote to Aaron Cooper with the governor’s planning and budget office and several others, saying: “I know that there is a desire to go live with the site very soon.”

Blackmon acknowledged minor inconsistencies between SAS’s and the health department’s analyses of the state’s Covid-19 data and noted that, if there were still concerns about SAS’s numbers, his team would need to get on the phone with the health department immediately and attempt to reconcile any discrepancies before SAS’s new dashboard was unveiled.

Two days later, on the morning of Monday, April 27, Kemp’s technology advisor Balasubramanian wrote in an email to his colleagues and to SAS that the governor’s office wanted the SAS dashboard to go live that afternoon. The launch would come a day ahead of schedule—and an hour and 15 minutes in advance of a press conference at which Governor Kemp, with health-department commissioner Dr. Toomey at his side, discussed how restaurants would safely reopen for dine-in customers effective immediately. Kemp also took a few moments to introduce the new data dashboard: “We realized as a team that we can provide a more unified, user-friendly platform for Georgians in every corner of our state.”

The next day, the health department’s Soetebier vented to his higher-up, Dr. Toomey: “As you know we were given a new website for the public yesterday for which we have had little input on to date and for which we no longer have direct control.” He also made clear that SAS should take responsibility for any dashboard problems. “I have asked them to own the ongoing list of issues that are identified with the dashboard and to commit to reviewing their progress on them with us regularly,” he wrote.

The public reaction to the dashboard was negative and swift. An AJC article two days after the dashboard’s launch noted that it “confused ordinary Georgians as they decide whether Gov. Brian Kemp was right to begin reopening the state’s [businesses]” and was “making it difficult for the public to determine if Georgia is meeting a key White House criteria for reopening.”

“A lot of people are now accusing us of trying to hide data and/or misrepresenting . . .”

Three days after the dashboard’s launch, Megan Andrews, the health department’s director of government relations, forwarded a roundup of constituent complaints to SAS’s Blackmon, asking for assistance in responding to the concerns expressed in the constituents’ emails.

Blackmon replied, “We will get you answers ASAP.” Four days later, Andrews’s deputy, Emily Jones, sent a follow-up email: “We are really in need of some answers for constituents,” she wrote on May 4. “A lot of people are now accusing us of trying to hide data and/or misrepresenting, so getting them information quickly is important.”

Particularly worrying to Jones was the concern several constituents had raised about perceived manipulation of the data to artificially show a decrease in cases. They “believe that these graphs are intentionally designed to show a downward trend and are wondering if a better explanation of the methodology can be given,” she wrote.

SAS’s Blackmon seemed to think the existing explanation on the dashboard was enough: “There is a clear asterisk under the chart” explaining that the last 14 days in the chart may be missing cases, he wrote. “That is what I have been telling people,” replied Jones, “but I wanted to make you aware that we are getting several of these inquiries a day.”
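The 14-day pending period at issue here is easy to illustrate. The following is a minimal sketch in Python of the kind of shading the health-department epidemiologists had requested, using invented numbers and hypothetical column names rather than anything from the state’s or SAS’s systems; it simply flags the trailing two weeks of a daily case chart as provisional so that incomplete recent counts are not read as a decline.

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical daily case counts; real figures would come from the surveillance system.
    cases = pd.DataFrame({
        "report_date": pd.date_range("2020-04-01", periods=30, freq="D"),
        "new_cases": [40 + 3 * i for i in range(30)],
    })

    # Everything after this cutoff falls inside the 14-day "pending period."
    cutoff = cases["report_date"].max() - pd.Timedelta(days=14)

    fig, ax = plt.subplots()
    ax.bar(cases["report_date"], cases["new_cases"])
    # Shade the provisional window so incomplete recent counts are not mistaken for a downturn.
    ax.axvspan(cutoff, cases["report_date"].max(), color="gray", alpha=0.2,
               label="Preliminary: reports still arriving")
    ax.set_xlabel("Date")
    ax.set_ylabel("New cases")
    ax.legend()
    plt.show()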

The next Saturday, May 9, a Twitter user called out an egregious graphic on the dashboard. “I’m sorry but I have to curse your twitter feeds with this nightmare graph from @GaDPH,” she tweeted. “The X axis shows dates, BUT not in chronological order for some godforsaken reason.” In an attached image captured from the dashboard, cases descended from left to right, at first glance suggesting a downward trend as time progressed—but as the out-of-order dates indicated, time was not actually progressing but jumping all over the place.

[Image: A screenshot of the chart published on Georgia’s Covid-19 dashboard in May that falsely showed a decrease in the state’s infections—by rearranging the order of the dates at the bottom.]

Other Twitter users were quick to speculate about the explanation for the chart’s unusual configuration: “Oh, we know the reason. A clear attempt to make the data say what they want it to say, rather than just letting it speak,” wrote one. Journalists also were perplexed: “Only in Brian Kemp’s Georgia is the first Thursday in May followed immediately by the last Sunday in April,” a Washington Post columnist quipped. Pete Corson of the AJC tweeted that the graphic had been “the subject of much head scratching” at his publication.
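The fix implied by the criticism is simple enough to spell out: order the axis by date before plotting, not by case count. The sketch below uses invented values purely to show the difference; it is not the dashboard’s actual code.

    import pandas as pd
    import matplotlib.pyplot as plt

    # Invented values, for illustration only.
    daily = pd.DataFrame({
        "report_date": pd.to_datetime(
            ["2020-05-07", "2020-04-26", "2020-05-03", "2020-04-30", "2020-05-05"]),
        "new_cases": [640, 1010, 820, 930, 700],
    })

    # Ordering the bars by case count is what yields a date axis that is
    # "not in chronological order" and a spurious left-to-right decline.
    print(daily.sort_values("new_cases", ascending=False)["report_date"].dt.date.tolist())

    # Ordering by date restores the real time series.
    chronological = daily.sort_values("report_date")
    fig, ax = plt.subplots()
    ax.plot(chronological["report_date"], chronological["new_cases"], marker="o")
    ax.set_xlabel("Date")
    ax.set_ylabel("New cases")
    plt.show()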

In a response to Corson, Kemp’s director of communications, Candice Broce, implied the health department was to blame: “The graph was supposed to be helpful,” she tweeted, “but was met with such intense scorn that I, for one, will never encourage DPH to use anything but chronological order on the x axis moving forward.”

Over the next two weeks, a volley of errors emerged from the dashboard: A chart showing Covid-19 cases by race mistakenly included a diagnosis date in 1970, making it unreadable; the total case number inadvertently included—then abruptly expunged—231 serology test results, resulting in a confusing decrease in positive cases between reporting periods; and data points went missing from charts depicting individual counties’ daily case numbers.

A May 19 AJC article explored multiple explanations for the mistakes, quoting Broce as saying of the health department: “We are not selecting data and telling them how to portray it, although we do provide information about constituent complaints, check it for accuracy, and push them to provide more information if it is possible to do so.” Although the story noted a Kemp aide had blamed “a software vendor” for the widely ridiculed nonchronological graph, it did not give further detail on the extent or nature of the vendor’s responsibility.

The next morning, the Columbus Ledger-Enquirer reported that the dashboard’s misstep with the serology tests “artificially lowers the state’s percentage of positive tests.” (Emails indicate that the dashboard’s errors stemming from the tests were due to the health department misclassifying them. “This is not a technical issue per se with the website,” Soetebier wrote to his health-department colleagues and Kemp’s chief management officer, Caylee Noggle.)
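The arithmetic behind that complaint is worth a brief illustration. Every number in the sketch below is invented; it only shows how folding antibody (serology) results into the totals can dilute the percent-positive figure that reopening decisions leaned on.

    # Invented counts, for illustration only.
    pcr_positive, pcr_total = 1_000, 10_000        # diagnostic (viral) tests
    serology_positive, serology_total = 20, 2_000  # antibody tests

    pcr_only_rate = pcr_positive / pcr_total
    combined_rate = (pcr_positive + serology_positive) / (pcr_total + serology_total)

    print(f"Percent positive, viral tests only:   {pcr_only_rate:.1%}")  # 10.0%
    print(f"Percent positive, serology mixed in:  {combined_rate:.1%}")  # 8.5%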

Amid the fresh wave of public rancor in the wake of the story, the health department’s Edison warned in an email the next day to Noggle that “by rushing through data analyses, we run the risk of making errors.” Edison proposed that a clarifying footnote be added to the dashboard. “It takes time to work through these complicated and far from perfect data.”

Four hours later, Edison sent Noggle and several other Kemp staffers a four-page data FAQ of sorts to post to the site. Balasubramanian, the governor’s technology advisor, forwarded the document to the SAS team with a request to post it to the website—but two minutes later, he walked that request back: “Hold on, don’t POST,” he wrote. “Please review and let me know if you have any suggestions.” (Kemp staffers later stripped almost all of the explanatory content from the data FAQ the health department team had written.)

At a May 21 press conference, Kemp addressed some of the public derision related to the dashboard. Citing his administration’s commitment to transparency and honesty, he praised Dr. Toomey and the health department: “They are taking massive amounts of data from all sources, putting them into accessible format under a global spotlight, all at breakneck speed,” he said. “Please afford them some patience, and please steer clear of personal attacks.”

But Kemp did not mention his own team’s role in creating much of the pressure the health department was under, nor the fact that some of the highest-profile mistakes had not been the health department’s errors at all.

“It’s a fair point that it could look like we’re ‘moving the goalposts’ . . .”

Emails also show that when health-department staffers sought potential fixes with SAS, their requests were not treated with a sense of urgency. In early June, Leslie Onyewuenyi, a newly hired interim director of informatics who was brought on to work above Soetebier and improve data quality at the health department, asked SAS’s Blackmon for a 30-minute call to review SAS’s quality-control process.

“I don’t believe that there is a need for a call unless [the health department’s] Karl [Soetebier] would like for us to convene,” Blackmon responded.

“We need a high level overview of process flow on your end,” Onyewuenyi wrote back. “Are there any quality control checks on your end before the data is published? The aim of this exercise is to reduce the risk of publishing inaccurate data whether from DPH side or from your end.”

After Onyewuenyi appeared not to get a response to this email or to a follow-up one he sent three days later reiterating his request, the governor’s planning and budget office intervened to set up a call between Onyewuenyi and Blackmon, noting that Blackmon was on vacation.

“We’ll respond on email first,” a SAS project manager wrote. “We can then follow-up as needed.”

At around the same time, Balasubramanian forwarded to SAS a media question that had been sent to the health department about a county map: Why was the threshold for a county to be shaded red—indicating the highest case rates in the state—changing from day to day?

SAS responded by forwarding an explanation from one of its systems engineers: “It’s a fair point that it could look like we’re ‘moving the goalposts’, it might be something we could revisit.” But the method behind the color-coding would remain unchanged until, more than a month later, a viral tweet pointed to it as an example of how the health department “is violating data visualization best practices in a way that’s hiding the severity of the outbreak.”
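To make the “moving the goalposts” point concrete: if the red tier is defined relative to whatever the worst-hit county looks like on a given day, the cutoff shifts daily and the same case rate can change color from one map to the next, while a fixed, absolute cutoff keeps the scale comparable over time. The sketch below is hypothetical and does not reproduce SAS’s actual color logic or thresholds.

    def tier_relative(rate_per_100k: float, todays_max: float) -> str:
        # Hypothetical "relative" scheme: red means the top quarter of today's range,
        # so the cutoff moves whenever the worst county's rate changes.
        return "red" if rate_per_100k >= 0.75 * todays_max else "lighter shade"

    def tier_fixed(rate_per_100k: float, cutoff: float = 100.0) -> str:
        # Hypothetical "fixed" scheme: red means an absolute rate per 100,000,
        # so a county's color is comparable from day to day.
        return "red" if rate_per_100k >= cutoff else "lighter shade"

    # The same county rate on two different days:
    print(tier_relative(90, todays_max=110))  # red
    print(tier_relative(90, todays_max=200))  # lighter shade (only other counties changed)
    print(tier_fixed(90))                     # lighter shade on both days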

Trent Smith, a senior external communications specialist with SAS, responded to a series of questions from Atlanta about the company’s work on Georgia’s Covid-19 dashboard by stating: “We can’t share customer names without their permission.” Smith also wrote: “SAS has been used for decades in public-health departments, from local to state to national governments and is currently in all U.S. state health departments.”

As the health department was publicly battered for mistakes over which it had little control, its leadership was well aware of the need to improve the dashboard and the magnitude of the fallout from its problems. On the Fourth of July, after reviewing examples of other states’ data dashboards, Dr. Toomey asked health-department staff to request that the SAS team add certain metrics to the dashboard and noted the negative public perception of her agency: “I am getting complaints from the public as well as other officials that we are deliberately not being transparent.”

Some of the state’s public-health experts felt Georgians deserved Covid-19 analysis and insight beyond what the SAS dashboard ultimately offered, and they tried—unsuccessfully—to offer that information on the health department’s site.

Back in March, at the same time SAS began what would be a six-week effort to build its dashboard, a team from another government agency was creating other Covid-19 dashboards for internal use.

Susan Miller, who leads the Georgia Geospatial Information Office (GIO), began working in March on maps to assist other state agencies in allocating pandemic-related resources. Her team used a product made by the California-based company Esri. No other mapping platform on the market is as “comprehensive, holistic, or stable” as Esri, she says. (Miller worked for the company as a product engineer in the early 2000s.)

In mid-April, the health department’s Edison asked Miller’s team to create a report aimed at providing the governor—and, possibly, the general public—with the data Georgians would need to decide when it was safe to reopen for business. She asked if it could take the same form as one of the internal dashboards the team had already created.

Once Miller’s team got started on the project, it took less than a week for a prototype to come together. The GIO’s Esri dashboard, compared with the SAS dashboard, had “increased functionality, such as ZIP code level data, death demographics by county/zip, and downloadable data,” wrote the health department’s Edison in an email to Miller and colleagues at other agencies on April 28, one day after the SAS dashboard launched. “I do not think the SAS Dashboard has the functionality that the ESRI one has and I think they can be used in tandem to complement each other.”

Esri’s software and the use of its consulting services weren’t free: The contract Miller’s parent agency signed with Esri’s Disaster Response Program in May totaled $265,000. But those dollars went toward Esri’s work on multiple mapping projects for a variety of agencies.

Health-department officials were hopeful about sharing the Esri dashboard on their agency’s website. “My goal is at a minimum to make this accessible from a link on the page,” Soetebier wrote on April 29 to health-department colleague Edison and staffers at GIO and Esri, “though we should be able to get a new page put together to properly house it.”

On May 13, Edison forwarded to Miller an announcement about a CDC partnership with Esri aimed at enabling all states—at no cost to them—to build or enhance data dashboards using the software. The next day, Edison exclaimed in an email to Miller, Soetebier, and an Esri employee: “We have some traction!” She wrote that two people from the governor’s office “are going to pitch the dashboard!”

But the Esri dashboard would not end up being included or even noted anywhere on the health department’s site. It was published on the GIO’s Covid-19 website, but it wasn’t publicized until Miller’s office published a blog post about it three months later, in mid-August—and even then, the existence of the dashboard remained largely unheralded for several more weeks.

Eventually, one government agency would find value in the multiple Esri dashboards Miller’s team had produced and published on GIO’s Covid-19 website. In September, GEMA replaced its daily Covid-19 situation report with that website, calling it “a one-stop shop for all of the data in a format that is more easily accessible.”

At the most concrete level, the problems with the state’s Covid-19 dashboard made it unreliable as a tool for Georgians simply trying to figure out how to safely go about their lives. As Georgia planned to reopen its doors for business in late spring, the health department fielded an onslaught of questions and complaints from people confused about how to interpret what they were seeing on the dashboard. The lead pastor at a church in Cobb County wrote to ask for help understanding how rampant the virus was locally, in the hopes of helping his church determine when to reopen for in-person worship. The assistant superintendent of a school district south of Seattle requested an explanation of conflicting case numbers in the hopes of advocating for reopening his own state. “I would like my state open, and Georgia serves as a bellwether,” he wrote. “Please explain the data so that I can advocate correctly and not put my community at risk.”

In the next three months, Georgians celebrated Memorial Day and the Fourth of July, and Governor Kemp quashed mayors’ efforts to enact local mask mandates and other protective measures. Also in that time, more than 155,000 Georgians were infected with the novel coronavirus, of whom 2,551 died.

Beginning in late July, the dashboard stopped attracting as much negative attention as it had early on. Although two public-health experts recently told Atlanta they would like to see additional data on the dashboard, such as case information by zip code and information related to school outbreaks, public outrage over the dashboard’s appearance has largely ebbed.

But public-health experts say the damage to the health department’s reputation caused by the dashboard’s pattern of problems may have lasting effects. In a statewide survey the health department conducted in late July, only 55 percent of respondents perceived the agency as credible. Amber Schmidtke, a volunteer advisor to the state’s Covid-19 Data Task Force who until recently was an assistant professor of microbiology at Mercer University in Macon, recalled several fumbled efforts at transparency on the state’s Covid dashboard, concluding: “So, yeah, I think it does harm people’s trust.”

Melanie Thompson, an Atlanta doctor and researcher who coauthored two July letters protesting Kemp’s handling of the pandemic that were signed by thousands of healthcare workers, says the contents of her inbox made the public’s loss of faith plain: “The emails and things that I got from a variety of people made me feel that there is no trust in the governor to do the right thing scientifically,” she says, “and that extends to the Department of Public Health, because [its] commissioner basically serves at the pleasure of the governor and does not contradict him at all.”

When public trust in an institution is sufficiently eroded, it can be hard to recover, says Joseph Cappella, a specialist in health communication at the University of Pennsylvania’s Annenberg School for Communication. “It’s the old idea of poisoning the well,” he says. When public-health institutions lose credibility as a consequence of one misstep, he says, the resulting lack of trust can impact their ability to effectively carry out other public-health activities, like vaccine distribution.

Clarity about who’s doing the work on state websites is important, too, says Laura Harker, a senior policy analyst at the Georgia Budget and Policy Institute. When a consulting company’s work is presented on an agency’s website, “having that made clear somewhere—at least the name on the bottom of who the outside contractor is, or some type of contact information for the data managers—is always, I think, important to have for transparency purposes,” she says.

The state of Georgia slashed the health department’s epidemiology budget during the lean years of the recession—from $6 million annually in 2009 to less than $4 million in 2011—and that budget was never fully restored. Georgia’s public-health funding lags well below the national average.

“People are thinking that public health has failed society,” says Dr. Madad, the New York City–based epidemiologist and preparedness expert. “No. Society has failed public health because we didn’t invest and see the value of it. And we’re seeing the consequences today.”

The early chaos of the Covid-19 dashboard shows how Georgia squandered the chance to shine a light on the merits and necessity of a public-health department, says Thompson. “This was an opportunity for DPH to shine, . . . to come into its own, and to really teach the public what public health is all about, to really engender trust.”

On April 28, the day after the SAS dashboard launched, health-department epidemiologist Edison and GIO head Miller exchanged emails about the difficulty of getting the best Covid-19 data to the public and the need for a more collaborative effort among government agencies. “My head is spinning,” Edison wrote. “I just want to share the damn data.”

Miller responded: “We can either feed the real data to Georgians, the country and the world . . . or let them fend for themselves. . . . I will back you on getting the data out until the end of time!!!!”
 


Massive Health Data Warehouse Delayed Again, A Decade After Texas Pitched It
The Texas Tribune
By Jim Malewitz and Edgar Walters
August 15, 2016

Link to original article

Texas health regulators are starting from scratch in designing a system to store massive amounts of data — after spending millions of dollars trying to roll out a version that’s now been scrapped.

Charles Smith, executive commissioner of the Texas Health and Human Services Commission, said Monday that his agency had recently nixed a $121 million contract to create an Enterprise Data Warehouse, an enormous database that would store a wide range of information about the many programs the agency administers. First funded in 2007, the project was expected to be up and running a few years later.

Because the original design would not link enough programs at the sprawling agency, regulators would essentially start from scratch on a much larger — and therefore more useful — system, Smith told members of the Texas House State Affairs Committee at a hearing on state contracting reform efforts.

"We were in the process of building a two-bedroom, two-bath home," he said, likening the effort to a home construction project. "You get it ready to prep your foundation, and I realize my spouse is pregnant with quadruplets."

The most recent design, which was largely focused on storing data on Medicaid and the Children’s Health Insurance Program, "isn’t going to meet the needs of our family," he added.

The update stirred concerns from some lawmakers about the lack of progress on a pricey project with a troubled history.

"Thirty-five million dollars we’ve spent on a project that was supposed to cost $120 million. For that, we have nothing?" asked Rep. Dan Huberty, R-Houston.

"Are we getting back to where we started?" asked Rep. Four Price, R-Amarillo.

Texas has spent $35 million on the project so far, with most coming from federal funds, said Smith, who was appointed to his post in May. About $6 million was tapped from state funds.

Smith did not have an estimate of how much the new, larger project would cost, because those assessments won’t begin until next fall — after the legislative session that begins in January.

He pushed back against suggestions that spending thus far was for naught, noting that the agency — as part of the planning process — had moved to a new software system that would be used in the new data warehouse.

"We’ll go through and develop a plan, and a timeline, and we’ll come back next session with everything we need to obtain through the process," he said.

Since the project was first funded, it has suffered myriad delays, as well as uncertainty about whether the federal government would pitch in with additional funding.

In 2013, the Health and Human Services Commission finally invited private companies to submit proposals for the contract. The next year, state officials chose Truven Health Analytics, a Michigan-based firm, as their tentative winner.

But after a series of contracting scandals at the agency prompted the resignation of several high-ranking officials, the state started over, and in November 2014 asked companies to re-apply for the funding.

Those proposals were due in February 2015, and state officials anticipated the project would begin on Sept. 1 of that year, according to the state’s latest published timeline for the project.

At the time, a spokeswoman for the health commission told the Houston Chronicle that the quality of the project was “more important than the timeline.” The agency nonetheless said it was “still possible” the project would be up and running by the end of 2015.

Smith said his agency needed a warehouse that would give it instant access to more data than the scrapped plans accounted for — such as information related to foster care.

"I’m talking to our staff about what is the capacity of our system," he said. "We don’t know how many families are willing and able."

Such concerns come at a time when his agency is growing in size and scope. Three of the state’s five health and human services agencies are consolidating into a single "mega-agency" — a reorganization ordered by state lawmakers in 2015.

The other two agencies, which oversee the state’s foster care system and public health infrastructure, respectively, will be considered for consolidation in 2017. State leaders have said that changing the Health and Human Services Commission’s configuration would streamline services and improve efficiency.

Some lawmakers took heart that Smith had refused to follow through with the warehouse’s original design, calling it a thoughtful approach.

"It sounds like the contract was inadequate," said Rep. Byron Cook, a Corsicana Republican who chairs the State Affairs Committee. "I appreciate that."
 


Problem-plagued Texas data project delayed again
Houston Chronicle
By Brian M. Rosenthal
Tuesday, June 28, 2016

Link to original article

AUSTIN -- Texas state health officials once again are delaying a massive data project that has struggled to get off the ground for more than a decade.

The state Health and Human Services Commission informed lawmakers Tuesday it was pausing the "Enterprise Data Warehouse" project, a plan for an elephantine database housing dozens of information sets about everything from welfare benefits to Medicaid.

"HHSC and the other Health and Human Services agencies are going through a transformation process..." the commission explained to the lawmakers. "Therefore, we are reevaluating our long-term data needs and want to ensure the best investment of state resources."

In a separate letter to the company that was set to run the project, the state officials said they would "revisit this necessary project after the transformation process has been substantially completed."

The commission said it was canceling the contractor solicitation process altogether, which means that even if officials decide to restart the project, it will be years before a vendor is chosen.

The decision is the latest twist in a project that has experienced an almost-comical series of setbacks and controversies.

First discussed in 2005, the project was envisioned as a way to improve services and spur savings through better data analysis. Lawmakers funded the project in 2007, calling for it to be operational by February of 2009.

Over the years, state budget writers have set aside more than $100 million for the project -- money that could not be used elsewhere -- and spent more than $12 million, mostly on consultants.

After a slew of delays caused by both the state and federal governments, the health commission thought it finally had gotten the project on track in the spring of 2014, when officials began negotiating a contract with Truven Health Analytics of Ann Arbor, Michigan.

Then came the eruption of a contracting scandal over alleged favoritism by commission officials toward another data company, 21CT of Austin. In a meeting in August of 2014, commission lawyer Jack Stick, who already had steered a Medicaid fraud detection project to 21CT, seemed to imply that the company could do the Enterprise Data Warehouse for less money than Truven.

Two weeks later, negotiations with Truven were over. The commission blamed the company's asking price and said there had been a leak that led Truven to learn about Stick's comment.

Stick and four other commission officials eventually resigned in connection with the 21CT scandal, and the Medicaid fraud project was canceled.

The data warehouse project was put out for bid again in November of 2014.

This February, the health commission disclosed that Truven once again had emerged as the winning bidder and would be given a $104 million contract -- nearly $35 million less than what was being discussed in 2014, said commission spokesman Bryan Black.

"The Health and Human Services Commission is excited the contract is signed and we are moving forward," Black said in February.

The fate of the contract may have shifted when former Executive Commissioner Chris Traylor retired last month. His replacement, Charles Smith, opted for the new approach, records show.



