Written By Bill Hewitt
Followers of the syndicated TV quiz show Jeopardy! were riveted by the recent performance of IBM’s Watson artificial intelligence computer system in head-to-head competition against the program’s two all-time best human contestants.
Watson had been built by IBM’s DeepQA project to respond to clues presented in natural language. Watson’s success in the man vs. machine demonstration offered viewers, at least briefly, the hope that with the right sort of technology, the growing glut of data out there could somehow be tamed. To prep for the show, Watson had access to 200 million pages of structured and unstructured content – including the full text of Wikipedia – taking up four terabytes of storage. But even that was just a drop in the bucket of data generated daily by governments, corporations and research institutions, as well as by private individuals using smartphones, laptops and other consumer devices.
Wal-Mart, for example, handles more than a million customer transactions every hour. Facebook hosts more than 50 billion photos. Google has set up thousands of servers in huge warehouses to process searches. The volume of wireless electronic chatter – a small portion of which could have vital national security significance – has grown exponentially. Indeed, 90 percent of the data that exists today was created within just the last two years. If the volume of knowledge at the dawn of the 20th century could fit into a shoebox, that knowledge today would fill Gillette Stadium 20 times over.
This pattern of growth is driven by rapid and relentless trends such as the rise of social networks, video and the Web. For organizations struggling to stay on top of their most critical missions, gaining visibility into that explosive surge in data – and extracting actionable business intelligence from it – has created unprecedented challenges.
That’s because big data causes big problems for companies, as well as for our economy and national security. Look no further than the financial crisis. Near the end of 2008, when the global financial system stood at the brink of collapse, the CEO of a global banking giant was repeatedly asked during a conference call with analysts to quantify the mortgage-backed security holdings on the bank’s books. Despite the bank’s having spent a whopping $37 billion on IT operations over the previous 10 years, his best response was a sheepish: “I don’t have that information.”
Had regulators and big banks been able to accurately assess their exposure to subprime lending, we might have dampened the recession and saved the housing market from its biggest fall in 30 years.
Not surprisingly, the business of information management – helping organizations make sense of this ever-expanding store of data – is growing tremendously. In just the past few years, Oracle, IBM, Microsoft, EMC and SAP have spent more than $20 billion buying software firms that specialize in data management and analytics. Today, that industry is estimated to be worth more than $100 billion and growing at almost 10 percent a year – roughly twice as fast as the software business as a whole.
But while the bigness of information volume poses tremendous management challenges, the badness of much of that data makes it even worse. Data today originates from a complex web of transaction, market and social media sources – not just from a company’s own ERP system. And much of it is sloppy, misleading, exaggerated or false. Bad data, of course, is nothing new. For years, companies have focused on cleaning up their flawed data files. But that may no longer be enough; the time has come to keep bad data from infecting the business environment in the first place.
Instead of remediating bad data after the fact, businesses, government agencies and research institutions need to create an information architecture that allows them to follow their data as it moves among different users in their organizations. Through that framework, they can hold the sources of information accountable for their data by monitoring its accuracy, consistency and quality – ultimately improving their intelligence, their processes and their performance. By strategically managing data as a shared enterprise asset, companies can react to emerging trends faster than their competition. Research institutions can spot storm systems, epidemics and other critical patterns in their formative stages. And national security agencies can keep a step ahead of potential threats.
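To make that idea concrete, here is a minimal, purely illustrative sketch in Python – the field names, rules and the “source” tag are invented for the example, not taken from any particular product – of the kind of source-level quality scoring such an architecture might perform, measuring each feed’s completeness, validity and consistency as records arrive rather than cleaning them up afterward.

    from collections import defaultdict

    # Hypothetical quality rules; each returns True when a record passes the check.
    RULES = {
        "completeness": lambda r: all(r.get(f) not in (None, "") for f in ("id", "amount", "date")),
        "validity": lambda r: isinstance(r.get("amount"), (int, float)) and r["amount"] >= 0,
        "consistency": lambda r: r.get("currency", "USD") == "USD",  # sample business rule
    }

    def score_sources(records):
        """Tally pass/fail counts per upstream source, so each source can be
        held accountable for the quality of the data it sends downstream."""
        tallies = defaultdict(lambda: {rule: {"pass": 0, "fail": 0} for rule in RULES})
        for record in records:
            source = record.get("source", "unknown")
            for rule, check in RULES.items():
                tallies[source][rule]["pass" if check(record) else "fail"] += 1
        return tallies

    # Example: one clean feed and one feed sending an incomplete, invalid record.
    feed = [
        {"source": "claims_system", "id": 1, "amount": 120.0, "date": "2011-03-01"},
        {"source": "partner_feed", "id": 2, "amount": -50, "date": ""},
    ]
    for source, results in score_sources(feed).items():
        print(source, results)

A real deployment would attach scores like these to much richer lineage metadata, but even this toy version shows the shift in emphasis: quality is measured at the point of entry, by source, instead of scrubbing bad records after they have already spread through the business.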
Fortunately, there are some encouraging case studies. Take Independent Health, of Buffalo, N.Y., consistently ranked by U.S. News and World Report among the highest-rated health plans in the nation. In 2009, the company faced rising healthcare costs and imminent industry reform from the national healthcare bill that would soon become law.
Most health insurers are committed to ensuring positive member outcomes while reducing healthcare costs, but many are plagued by data issues that keep them from understanding how to achieve both. Independent Health recognized early that conflicting data sources, complicated analytical reports and regulatory requirements would only multiply, and determined it needed the right data to better manage healthcare costs while simultaneously improving service and outcomes for the individuals insured under its plans.
Disparate data from many different sources was not easy to leverage, which is why Independent Health decided to change the way it managed its information. The flexibility of the new information management model provides a full and rapid view of all external, internal and claims-based data and meets Independent Health’s growing needs.
By creating a framework to understand member, employer and provider data, the company was able to move from a claims-centric data warehouse architecture to an information-based one that enables it to integrate data sources faster and draw on more comprehensive information.
Companies that succeed in creating frameworks that cross internal functions, free of departmental silos, gain a competitive edge. When departments collaborate to create a shared vision for high-quality data, the entire organization – along with its customers and shareholders – wins. Sort of like Watson.
Bill Hewitt is CEO of Kalido, a provider of data governance software.