Why the fear over ubiquitous data encryption is overblown

Mike McConnell is a former director of the National Security Agency and director of national intelligence. Michael Chertoff is a former homeland security secretary and is executive chairman of the Chertoff Group, a security and risk management advisory firm with clients in the technology sector. William Lynn is a former deputy defense secretary and is chief executive of Finmeccanica North America and DRS Technologies.

More than three years ago, these former national security officials penned an op-ed to raise awareness among the public, the business community and Congress of the serious threat to the nation’s well-being posed by the massive theft of intellectual property, technology and business information by the Chinese government through cyber-exploitation.

In the wake of global controversy over government surveillance, a number of U.S. technology companies have developed and are offering their users what we call ubiquitous encryption — that is, end-to-end encryption of data with only the sender and intended recipient possessing decryption keys. With this technology, the plain text of messages is inaccessible to the companies offering the products or services as well as to the government, even with lawfully authorized access for public safety or law enforcement purposes.
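For readers unfamiliar with the mechanics, the following minimal Python sketch (using the third-party cryptography package) illustrates the principle: each endpoint generates its own key pair, the two sides derive a shared secret, and anything relayed in between is ciphertext. This is an illustration of the concept only, not any particular product’s protocol.

```python
# Minimal sketch of end-to-end encryption: only the two endpoints ever
# hold the keys needed to decrypt, so no intermediary (provider or
# government) can read the plain text. Real messengers layer far more
# elaborate protocols on top of this basic idea.
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Each endpoint generates its own key pair; private keys never leave the device.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# Diffie-Hellman exchange: both sides derive the same shared secret from
# their own private key and the other side's public key.
shared_alice = alice_priv.exchange(bob_priv.public_key())
shared_bob = bob_priv.exchange(alice_priv.public_key())
assert shared_alice == shared_bob

# Derive a symmetric message key from the shared secret.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"demo e2e").derive(shared_alice)

# Sender encrypts; any relaying server sees only ciphertext.
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"meet at noon", None)

# Only the recipient, holding the same derived key, can decrypt.
assert AESGCM(key).decrypt(nonce, ciphertext, None) == b"meet at noon"
```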

The FBI director and the Justice Department have raised serious and legitimate concerns that ubiquitous encryption without a second decryption key in the hands of a third party would allow criminals to keep their communications secret, even when law enforcement officials have court-approved authorization to access those communications. There also are concerns about such encryption providing secure communications to national security intelligence targets such as terrorist organizations and nations operating counter to U.S. national security interests.

Several other nations are pursuing access to encrypted communications. In Britain, Parliament is considering requiring technology companies to build decryption capabilities for authorized government access into products and services offered in that country. The Chinese have proposed similar approaches to ensure that the government can monitor the content and activities of their citizens.

We recognize the importance our officials attach to being able to decrypt a coded communication under a warrant or similar legal authority. But the issue that has not been addressed is the competing priorities that support the companies’ resistance to building in a back door or duplicated key for decryption. We believe that the greater public good is a secure communications infrastructure protected by ubiquitous encryption at the device, server and enterprise level without building in means for government monitoring.

First, such an encryption system would protect individual privacy and business information from exploitation at a much higher level than exists today. As a recent MIT paper explains, requiring duplicate keys introduces vulnerabilities in encryption that raise the risk of compromise and theft by bad actors. If third-party key holders have less than perfect security, they may be hacked and the duplicate key exposed. This is no theoretical possibility, as evidenced by major cyberintrusions into supposedly secure government databases and the successful compromise of security tokens held by a major information security firm. Furthermore, requiring a duplicate key rules out security techniques such as one-time-only private keys.

Second, a requirement that U.S. technology providers create a duplicate key will not prevent malicious actors from finding other technology providers who will furnish ubiquitous encryption. The smart bad guys will find ways and technologies to avoid access, and we can be sure that the “dark Web” marketplace will offer myriad such capabilities. This could lead to a perverse outcome in which law-abiding organizations and individuals lack protected communications but malicious actors have them.

Finally, and most significantly, if the United States can demand that companies make available a duplicate key, other nations such as China will insist on the same. There will be no principled basis to resist that legal demand. The result will be to expose business, political and personal communications to a wide spectrum of governmental access regimes with varying degrees of due process.

Strategically, the interests of U.S. businesses are essential to protecting U.S. national security interests. After all, political power and military power are derived from economic strength. If the United States is to maintain its global role and influence, protecting business interests from massive economic espionage is essential. And that imperative may outweigh the tactical benefit of making encrypted communications more easily accessible to Western authorities.

History teaches that the fear that ubiquitous encryption will cause our security to go dark is overblown. There was a great debate about encryption in the early ’90s. When the mathematics of “public key” encryption were discovered as a way to provide encryption protection broadly and cheaply to all users, some national security officials were convinced that if the technology were not restricted, law enforcement and intelligence organizations would go dark or deaf.

As a result, the idea of “escrowed key,” known as Clipper Chip, was introduced. The concept was that unbreakable encryption would be provided to individuals and businesses, but the keys could be obtained from escrow by the government under court authorization for legitimate law enforcement or intelligence purposes.

The Clinton administration and Congress rejected the Clipper Chip based on the reaction from business and the public. In addition, restrictions were relaxed on the export of encryption technology. But the sky did not fall, and we did not go dark and deaf. Law enforcement and intelligence officials simply had to face a new future. As witnesses to that new future, we can attest that our security agencies were able to protect national security interests to an even greater extent in the ’90s and into the new century.

Today, with almost everyone carrying a networked device on his or her person, ubiquitous encryption provides essential security. If law enforcement and intelligence organizations face a future without assured access to encrypted communications, they will develop technologies and techniques to meet their legitimate mission goals.

 

Editor’s note: Article reposted from ‘The Washington Post’


Mike McConnell, Michael Chertoff, William Lynn. “Why the fear over ubiquitous data encryption is overblown”

The Washington Post. N.p., Web. 23 June 2016.

Analytics could be the key to cyber defense

With Defense Department networks under constant attack, officials have been at pains to develop necessary defensive measures. One approach: big data tools and analytic capabilities that have played a big role in the past and will continue to be vitally important in defending against a vast array of attacks.

Many have called for more automation in responding to cyber incidents, given the rapid pace at which cyberattacks occur. DOD CIO Terry Halvorsen, however, is taking this a step further. “I want autonomous basic security tools – not automated, I want autonomous basic security tools that I can just let go that will look at my network, sensor it, and say, ‘You know what, there’s an attack happening here, we’re immediately going to quarantine this part of the network, we’re going to add some security protection,’” he said at the Brocade Federal Forum on June 15, while requesting industry help in this area. “I can’t have people in that loop…it’s too fast.”

Analytic tools can help monitor network traffic and the threats coming across it. These tools include the Cybersecurity Situational Awareness Analytic Cloud, or CSAAC, which aggregates and fuses data from various sensors and endpoints to analyze potential threats across the network, David Mihelcic, Defense Information Systems Agency CTO, said at an AFCEA-sponsored breakfast June 15.

According to DISA, CSAAC allows for more informed decision-making based upon broader information sets driven from open source and classified components in addition to leveraging community tech transfers from other DOD entities. CSAAC also supports the Joint Information Environment – a unified command and control IT architecture shared across all the services – and the Joint Regional Security Stacks, enabling greater cross-DOD collaboration and stronger defense of the DOD Information Network.

Mihelcic announced plans to upgrade CSAAC’s underlying technology in August. The update to DISA’s big data platform will allow data in the cloud to be copied and have custom, mission-focused analytics run on top of it without interacting with the rest of the platform. The benefit here is “we’ll be able to take either commercially developed analytics or analytics…operated out in the field and run those against some or all of that data without necessarily having it interact with the purpose-built and certified core analytics,” Mihelcic said. This capability will really accelerate the development and deployment of analytics at the tip of the spear, he added, noting that it will enable analytics to be built on the fly.

Other analytic tools work from indicators, which include reports of malicious activity. “What happened prior to our analytics is that we received these reports and by hand we would have to go and translate these reports into figuring out, OK, here’s the various countermeasures, so here’s the blocks where we’re going to put different tools to be able to defend ourselves against whatever these threats are,” Jack Wilmer, vice director for the development business center at DISA, said at the same breakfast. “So we were able to automate a lot of that, and I think there’s a 500 percent increase in the number of countermeasures that each analyst could implement, basically, per day, which yielded pick your number of thousands of additional countermeasures that we could deploy every month, year, etc.”
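The kind of translation Wilmer describes automating can be sketched in a few lines of Python. The report format and rule syntax below are invented purely for illustration; DISA’s actual feeds and countermeasure formats are not public.

```python
# Hypothetical sketch of indicator-to-countermeasure automation:
# translate a machine-readable threat report into block rules.
# The report schema and rule syntax here are invented for illustration.
import json

report = json.loads("""
{
  "indicators": [
    {"type": "ip",     "value": "203.0.113.7",  "threat": "c2-server"},
    {"type": "domain", "value": "evil.example", "threat": "phishing"}
  ]
}
""")

def to_countermeasure(indicator):
    # Map each indicator type to a rule an enforcement point could consume.
    if indicator["type"] == "ip":
        return f"deny ip any host {indicator['value']}"
    if indicator["type"] == "domain":
        return f"dns-sinkhole {indicator['value']}"
    raise ValueError(f"unknown indicator type: {indicator['type']}")

for ind in report["indicators"]:
    print(to_countermeasure(ind))  # hand these off instead of writing by hand
```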

Wilmer added that there are significant investments being made in this area. “There seems to be an endless stream of desire for, ‘Hey, maybe we could take various sources of data and come up with this metric or this analytic or all kinds of other areas,’” he said. In line with Halvorsen’s plea to industry, Wilmer said there is the desire for “more of a near real-time ability to do some of these defenses, so not necessarily having to have the people in the loop to implement things.”

Mihelcic also noted there are several opportunities for industry in hunt tools, something he said he expects to see more of in the future. The Cyber Protection Teams – which will number 68 of the eventual 133 cyber teams under Cyber Command and focus specifically on DOD’s number one mission, defense of the network – use tools to find adversaries on the network. These tools could be used “on a persistent basis to look across the information that’s available in the network to look for adversaries,” he said.

Mihelcic told Defense Systems following the panel that there are at least three commercial companies he knows of working on hunt tools, though he declined to name them. He added that these tools could and should be used by everyday administrators in addition to CPTs. “I think we’re going to need these hunt tools for our day to day systems and cyber administrators so essentially they can on a regular basis try to use the data out of the network to identify adversaries and then pass that along to the CPTs to actively eject them from the network,” he said.

The hunt mission is somewhat of a change in procedure for DOD and DISA. “The biggest change both in DOD and the commercial world … is we’re going out and hunting for the enemy on a daily basis,” John Hickey, DISA’s cyber security authorizing official, said in January. “We don’t really talk about where we’re hunting, obviously, we don’t even tell the people on the inside where we’re necessarily hunting things and we’re certainly not going to tell the folks on the outside, right?”

Officials also discussed the need for vigilance. “In almost every attack that we see … bad guys exploit the same old preventable vulnerabilities that we’ve been saying we need to prevent for 20-25 years,” DOD’s Deputy CIO for Cybersecurity Richard Hale said.

“We’ve got to be vigilant about patching those systems. We’ve got to be vigilant about operating the systems – not just the cybersecurity professionals but also the system administrators – monitoring logs, etc.,” Mihelcic added.

Editor’s note: Article reposted from ‘Defense Systems’


Mark Pomerleau. “Analytics could be the key to cyber defense – Defense Systems”

Defense Systems. N.p., Web. 16 June 2016.

 

Unscrambling the future of encryption

As the more subtle attempts at undermining security become impossible, spies will have to find alternative routes to access their targets. Earlier this year the UK government published the legal framework under which GCHQ and other British spies can hack computers, servers, routers, laptops, and mobile phones, use bugging devices, or even steal and replace equipment, to either obtain information or conduct surveillance.

The guidelines create a legal framework for such behaviour under UK law, and even okay potential intelligence-gathering activities that involve hacking attempts against people who are themselves not targets of intelligence agencies.

This gives some credence to Snowden’s recent claim that intelligence agencies are targeting IT staff because they have access to systems and databases.

It’s also worth noting that, despite the anguished howls from law enforcement, spy agencies and others still have plenty of data left to sift.

Firstly, encryption is really, really hard to get right: as projects like Bullrun and others have proved, the intelligence agencies and law enforcement still have plenty of ways around it. There are legal tools, for example: the UK has legislation in place which makes it an offence to not hand over encryption keys when requested by law enforcement, punishable by up to five years in prison.

And while many tech companies may well encrypt customers’ data when it is on the move — such as between datacentres — many will not secure it entirely using end-to-end encryption.

Why? Simply because they need to look at your email or web browsing themselves in order to sell advertising against its subject matter.

The advertising-driven business models of Silicon Valley rule out the pervasive use of strong end-to-end encryption, and that means intelligence agencies and police can continue to gain access to vast amounts of information.

Police and intelligence agencies still have plenty of other data sources — the metadata on communications, including who you have called, when, and for how long, CCTV, and more.

“Law enforcement agencies have access to more data now than they have had in the history of time. Pre-Facebook, how hard would it be for any law enforcement agency on the planet to find out all your known associates? They’d have to question dozens of people to find out who it is you know. They are able to get access to vast amounts of information just by asking,” said Privacy International’s Hosein.

“They complain that they’re not getting enough information but they’ve had more than they’ve ever had before,” he added.

Edinburgh Napier University’s Buchanan echoes the sentiment: “There are now so many ways that investigators can actually investigate someone who is suspected of committing a crime there isn’t really a problem. This isn’t going to shut the door.” Good old-fashioned policing and follow-the-money are still the most effective ways of catching the bad guys.

And widespread usage of strong encryption is not the worst scenario for the spies: harder to crack and harder to detect technologies are already either in existence or in development.

One such technology is steganography — hiding communications within digital images — and it’s incredibly hard to spot. Equally, quantum encryption could do away with the inherent weakness of the public key infrastructure systems used today and make messages impossible to intercept.
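As a rough illustration of how least-significant-bit steganography can work, here is a minimal Python sketch using the Pillow imaging library. Real steganographic tools are considerably more sophisticated and harder to detect; this only shows why a message hidden this way is invisible to the eye.

```python
# Minimal least-significant-bit steganography sketch using Pillow.
# Each pixel's red channel carries one bit of the message, so the
# visual change to the image is imperceptible.
from PIL import Image

def embed(img, message):
    bits = "".join(f"{b:08b}" for b in message.encode()) + "0" * 8  # NUL terminator
    out = img.copy()
    px = out.load()
    for i, bit in enumerate(bits):
        x, y = i % img.width, i // img.width
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite the red channel's LSB
    return out

def extract(img):
    px = img.load()
    data = bytearray()
    for i in range(0, img.width * img.height, 8):
        byte = 0
        for j in range(8):
            x, y = (i + j) % img.width, (i + j) // img.width
            byte = (byte << 1) | (px[x, y][0] & 1)  # read red LSBs back out
        if byte == 0:                               # hit the NUL terminator
            break
        data.append(byte)
    return data.decode()

cover = Image.new("RGB", (64, 64), color=(120, 120, 120))
stego = embed(cover, "meet at noon")
print(extract(stego))  # -> "meet at noon"
```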

Still, even the experts don’t really know how the future of encryption is going to play out: there is apparently no way of reconciling the intelligence agencies’ desire to be able to access the data they want with the safe and secure working of the web as we know it.

They are mutually exclusive, and mutually antagonistic. Like the best encryption, the problem of making national security and privacy work together seems uncrackable.

“Many of us agree with the sentiment — I am one of them — that from a security perspective you don’t want people who would do you harm being able to talk in secret. But at the same time if your answer to that is to ban encryption, that is a very bad way; the technology is not good or evil, it is the people using it,” said the University of Surrey’s Woodward.

“If we can’t secure these things, then people will die.”

Technology is unlikely to offer a way out of this impasse. As the power of supercomputers (or more likely giant cloud arrays) continues to grow, it’s easy enough to increase the size of the key — from 512, to 1024, to 2048 bits and onwards.
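A small sketch of that asymmetry, using the Python cryptography package: generating a larger key is quick for the defender, while the attacker’s search effort explodes. The specific sizes shown are illustrative.

```python
# Growing the key is cheap for defenders: generating a larger RSA key
# takes moments, while the attacker's factoring effort grows
# super-polynomially with key size.
from cryptography.hazmat.primitives.asymmetric import rsa

for bits in (1024, 2048, 4096):
    key = rsa.generate_private_key(public_exponent=65537, key_size=bits)
    print(f"generated RSA-{key.key_size} key")

# For symmetric ciphers the arithmetic is starker still:
# every extra bit doubles a brute-force search.
print(f"AES-128 keyspace: {2**128:.2e} keys")
print(f"AES-256 keyspace: {2**256:.2e} keys")
```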

Even if quantum computers, long touted as a way of cracking all encryption almost immediately, become widespread, the reality is that although they would undermine encryption in one way, they would also boost it again (thanks to something called quantum key distribution). And as Woodward notes, “we’ve been talking about viable quantum computers since the 80s and they’re always 10 years away.”

But the stakes may continue to rise, at least from a certain point of view.

“The security of our common computing infrastructure is even more important now than it was back then. Back in the 1990s, the reason we won was because every economy wanted to be the best marketplace for ecommerce on the planet so they knew they could not put constraints on security technology if they wanted to enable all that ecommerce,” said Privacy International’s Hosein.

And soon those issues of privacy and security will become as concrete as the buildings we live in. With the advent of smart grids, the internet of things and smart cities, we will be using the web to monitor and control real-world systems. “If we can’t secure these things, then people will die,” he warns.

This also raises another issue: as our houses and even clothes are filled with sensors, what sort of privacy is appropriate? Is it right that we should be snooped on through our smart TV or networked baby monitor, or our webcams or smartwatches? Can we draw a line anywhere?

When President Obama was asked about the issue of encryption his response was nuanced. While he said he supported strong encryption he also noted: “The first time an attack takes place and it turns out that we had a lead and we couldn’t follow up on it, the public is going to demand answers, and so this is a public conversation that we should end up having.”

It’s entirely possible to argue that we don’t need another public debate about encryption: that we had one back in the 1990s. And that privacy had trumped national security when it came to the use of strong encryption. It’s just that the intelligence services didn’t like the answer.

But there are plenty of good reasons why we do need to go over the arguments about encryption again.

“This is a public conversation that we should end up having.”

Back in the 1990s and 2000s, encryption was a complicated, minority interest. Now it is becoming easy and mainstream, not just for authenticating transactions but for encrypting data and communications.

Back then, it was also mostly a US debate because that was where most strong encryption was developed. But that’s no longer the case: encryption software can be written anywhere and by anyone, which means no one country can dictate global policy anymore.

Consider this: the right to privacy has long been considered a qualified rather than an absolute right — one that can be infringed, for example, on the grounds of public safety, or to prevent a crime, or in the interests of national security. Few would agree that criminals or terrorists have the right to plot in secret.

What the widespread use of strong, well-implemented encryption does is promote privacy to an absolute right. If you have encrypted a hard drive or a smartphone correctly, it cannot be unscrambled (or at least not for a few hundred thousand years).

At a keystroke, it makes absolute privacy a reality, and thus rewrites one of the fundamental rules by which societies have been organised. No wonder the intelligence services have been scrambling to tackle our deliberately scrambled communications.

And our fear of crime — terrorism in particular — has created another issue. We have demanded that the intelligence services and law enforcement try to reduce the risk of attack, and have accepted that they will gradually chip away at privacy in order to do that.

However, what we haven’t managed as a society is to decide what is an acceptable level of risk that such terrible acts might occur. Without that understanding of what constitutes an acceptable level of risk, any reduction in our privacy or civil liberties — whether breaking encryption or mass surveillance — becomes palatable.

The point is often made that cars kill people and yet we still drive. We need to have a better discussion about what is an acceptable level of safety that we as a society require, and what is the impact on our privacy as a result.

As the University of Surrey’s Woodward notes: “Some of these things one might have to accept. Unfortunately there might not be any easy way around it, without the horrible unintended consequences. You make your enemies less safe but you also make your friends less safe by [attacking] encryption — and that is not a sensible thing to do.”

“Working at the White House, we don’t get easy problems; easy problems get solved someplace else.”

And while the US can no longer dictate policy on encryption, it could be the one to take a lead which others can follow.

White House cybersecurity coordinator Michael Daniel recently argued that, as governments and societies are still wrestling with the issue of encryption, the US should come up with the policies and processes and “the philosophical underpinnings of what we want to do as a society with this so we can make the argument for that around the planet… to say, this is how free societies should come at this.”

But he doesn’t underestimate the scale of the problem, either. Speaking at an event organised by the Information Technology and Innovation Foundation, he said: “Working at the White House, we don’t get easy problems, easy problems get solved someplace else, they don’t come to us. This is one of the hardest problems I know about, certainly that’s anywhere close to my job. And I think it’s clearly not one that’s going to be resolved easily, simply or quickly.”

Which brings us back to those civil war codenames, Bullrun and Edgehill, which may serve as an inadvertent, gloomy prophecy about the future effectiveness of the intelligence agencies, unless we have a better discussion about how security and privacy can work together online.

If not, it’s worth remembering that the Cavaliers and the Confederates both won the first battles of the English and American civil wars, just as both would finally lose those bloody and divisive wars. Perhaps, after a few early victories in the new crypto war, the intelligence agencies may face a similar defeat, outpaced by encryption in the long term.

It may be that in a few decades, the spies look back at the tribulations of the first and second crypto wars with something approaching nostalgia.

Editor’s note: Article reposted from ‘TechRepublic’


Steve Ranger. “The undercover war on your internet secrets: How online surveillance cracked our trust in the web – TechRepublic”

TechRepublic. N.p., Web. 10 June 2016.

 

The Encryption Backlash

Of course, it’s often argued that all of this activity is simply the NSA (National Security Agency) doing their job: they break codes and have done so for decades, to make sure that criminals, terrorists, and others cannot plot in secret. If this means exploiting weaknesses in software in order to eavesdrop on those who are plotting crime, then so be it.

As GCHQ (Government Communications Headquarters) told a government enquiry set up after the Snowden revelations: “Our goal is to be able to read or find the communications of intelligence targets.”

From that perspective, they’re doing nothing more than the code-breakers of Bletchley Park did back in WWII — cracking codes in secret to fight the country’s enemies.

But many argue that the analogy doesn’t hold: Bletchley worked on cracking codes used by, and only by, the Nazis. What the NSA and GCHQ have been doing is breaking the codes used by everyone, good and bad, both outside of the US and inside it. By doing so, they risk undermining the security of all communications and transactions.

Those weaknesses and backdoors created or discovered by the NSA and its colleagues elsewhere can be used by hackers and hostile states as easily as they can by our own intelligence agencies. Access for them to spy on the few automatically means insecurity for the rest of us.

As Snowden told the recent CeBIT conference in Germany: “When we talk about security and surveillance, there is no golden key that allows only good guys to read the communications of only terrorists.”

Some privacy advocates also argue that no government should ever have such a capability to trawl through the lives of individuals. “It produces an inescapable prison. We can’t let this happen. We have to, as a matter of civic hygiene, prevent it from happening,” Phil Zimmermann, the creator of the PGP encryption software, said recently.

And if the Snowden revelations themselves were an embarrassment for the intelligence agencies, the consequences for their intelligence gathering capabilities have been far worse.

In response, big internet companies such as Yahoo and Google rapidly started encrypting the traffic flowing between their datacentres to shut out the watchers. As one cryptography expert, Matthew Green from Johns Hopkins University, noted at the time: “Good job NSA. You turned Yahoo into an encryption powerhouse.”

Encrypting data links between datacentres was only the beginning. As the revelations continued to tumble out, more companies decided it was time to increase the privacy of their services, which meant even more encryption.

“If those of us in positions of responsibility fail to do everything in our power to protect the right of privacy we risk something far more valuable than money. We risk our way of life.”

“Encryption has only really become a big issue again because Snowden showed the world how insecure the infrastructure was and how it was being abused by intelligence agencies and so companies started reacting,” said Gus Hosein, the executive director of campaigning group Privacy International.

Perhaps surprisingly, given the decade-long assault on encryption, it seems the fundamentals of it remain strong, so long as it has been well implemented. As Snowden said: “Encryption works. Properly implemented, strong crypto systems are one of the few things that you can rely on,” before adding the caveat: “Unfortunately, endpoint security is so terrifically weak that NSA can frequently find ways around it.”

Consumer applications are jumping on the encryption bandwagon. In November 2014, the popular WhatsApp messaging service also switched on end-to-end encryption for hundreds of millions of users who post billions of messages each day.

Using end-to-end encryption like this means law enforcement cannot access the messages sent at all. Previously they have been able to access communications at the datacentre with a warrant, because the messages would be stored there unencrypted. But end-to-end encryption means that from the point it leaves one phone to the point it arrives at the other, the message is scrambled.

Apple’s iOS 8 operating system now encrypts iMessage conversations and FaceTime video chats end-to-end.

“Apple has no way to decrypt iMessage and FaceTime data when it’s in transit between devices. So unlike other companies’ messaging services, Apple doesn’t scan your communications, and we wouldn’t be able to comply with a wiretap order even if we wanted to,” the company says.

Speaking at a cybersecurity summit hosted by the White House at Stanford University, Apple CEO Tim Cook made his position clear, that providing privacy was a moral stance: “History has shown us that sacrificing our right to privacy can have dire consequences. We still live in a world where all people are not treated equally. Too many people do not feel free to practice their religion or express their opinion or love who they choose, a world in which that information can make the difference between life and death.”

“If those of us in positions of responsibility fail to do everything in our power to protect the right of privacy we risk something far more valuable than money. We risk our way of life,” said Cook.

Apple isn’t alone in this. The Electronic Frontier Foundation lists a variety of applications that to a greater or lesser extent now encrypt communications in transit or end-to-end.

The backlash had begun to gather pace.

This unexpected shift towards greater privacy caught the intelligence services and law enforcement off guard. They suddenly found that easy sources of data had gone dark. Senior officials on both sides of the Atlantic began to warn that criminals and terrorists would be able to slip through their fingers. As GCHQ’s new director Robert Hannigan said:

“Techniques for encrypting messages or making them anonymous which were once the preserve of the most sophisticated criminals or nation states now come as standard. These are supplemented by freely available programs and apps adding extra layers of security, many of them proudly advertising that they are ‘Snowden approved’.”

He wasn’t alone in voicing such fears. Late last year, one of his predecessors, Sir David Omand, gave a similar warning to a government privacy and security inquiry.

“Post-Snowden, the companies are now making their devices technically inaccessible even to themselves.”

Another unexpected consequence of the revelations about Western intelligence agencies’ behaviour is that, unsurprisingly, other nations have also demanded access to encryption keys. That’s the problem with putting backdoors into secure systems: once one nation, law enforcement agency, or legal system has them — officially or unofficially — then everybody wants one.

For example, a new anti-terrorism law in China, which could be adopted into law in 2015, would require US technology firms that want to do business in the country to turn over their encryption keys and communications records to the government.

President Obama has complained about the proposed legislation, demonstrating neatly that one country’s dangerous backdoor security vulnerability is another country’s essential tool.

Sabre88 considers encryption a boon, not a bane. Let’s live a life with security, and the right way to do that is by encrypting all sensitive data.

 

Editor’s note: Article reposted from ‘TechRepublic’


Steve Ranger. “The undercover war on your internet secrets: How online surveillance cracked our trust in the web – TechRepublic”

TechRepublic. N.p., Web. 2 June 2016.

 

The undercover war on your internet secrets

A black-shrouded figure appears on the screen, looming over the rapt audience, talking about surveillance. But this is no Big Brother figure seeking obedience; rather the opposite. Perhaps even his nemesis.

NSA contractor-turned-whistleblower Edward Snowden is explaining how his former employer and other intelligence agencies have worked to undermine privacy on the internet and beyond.

“We’re seeing systemic attacks on the fabrics of our systems, the fabric of our communications… by undermining the security of our communications, they enable surveillance,” he warns.

He is speaking at the conference via a video link from Russia, where he has taken refuge after leaking the documents detailing some of the NSA’s surveillance projects. The room behind him is in darkness, giving away nothing about his exact location.

“Surveillance is not possible when our movements and communications are safe and protected — a satellite cannot see you when you are inside your home — but an unprotected computer with an open webcam can,” he adds.

Edward Snowden speaking at the CeBIT tech show. Image: Deutsche Messe, Hannover

One of the most significant technologies being targeted by the intelligence services is encryption.

Online, encryption surrounds us, binds us, identifies us. It protects things like our credit card transactions and medical records, encoding them so that — unless you have the key — the data appears to be meaningless nonsense.
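A tiny illustration of that point, using the Fernet recipe from the Python cryptography package: without the key, the stored token really is meaningless bytes.

```python
# Without the key, the encrypted token is meaningless nonsense;
# with it, the original data comes straight back.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                    # only the key holder can decrypt
token = Fernet(key).encrypt(b"4111 1111 1111 1111")
print(token)                                   # opaque bytes to anyone else
print(Fernet(key).decrypt(token))              # b'4111 1111 1111 1111'
```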

Encryption is one of the elemental forces of the web, even though it goes unnoticed and unremarked by the billions of people that use it every day.

But that doesn’t mean that the growth in the use of encryption isn’t controversial.

For some, strong encryption is the cornerstone of security and privacy in any digital communications, whether that’s for your selfies or for campaigners against an autocratic regime.

Others, mostly police and intelligence agencies, have become increasingly worried that the absolute secrecy that encryption provides could make it easier for criminals and terrorists to use the internet to plot without fear of discovery.

As such, the outcome of this war over privacy will have huge implications for the future of the web itself.

The code wars

Codes have been used to protect data in transit for thousands of years, and have long been a key tool in warfare: the Caesar cipher was named after the Roman general who used it to protect his military secrets from prying eyes.

These ciphers were extremely basic, of course: the Caesar cipher turned a message into code simply by replacing each letter with the one three down in the alphabet, so that ‘a’ became ‘d’.
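The scheme is simple enough to express in a few lines of Python:

```python
# The Caesar cipher described above: shift each letter three places
# down the alphabet, so that 'a' becomes 'd'.
def caesar(text, shift=3):
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return "".join(out)

print(caesar("attack at dawn"))       # -> "dwwdfn dw gdzq"
print(caesar("dwwdfn dw gdzq", -3))   # decryption is just the reverse shift
```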

Ciphers became more sophisticated, and harder to break, over the centuries, but it was the Second World War that demonstrated the real importance of encryption — and cracking it. The work done at Bletchley Park to crack German codes including Enigma had a famous impact on the course of the war.

As a result, once the war was over, encryption technology was put on the US Munitions List alongside tanks and guns as an ‘auxiliary military technology’, which put restrictions on its export.

“The real fundamental problem is the internet and the protocol it’s all based on was never intended to be secure.” - ALAN WOODWARD, SURREY UNIVERSITY

In practice, these government controls didn’t make much difference to ordinary people, as there were few uses for code-making — that is, encryption — outside the military.

But all that changed with the arrival of the personal computer. It became an even bigger issue as the huge economic potential of the web became apparent.

“The internet and the protocol it’s all based on was never intended to be secure, so if we are going to rely on the internet as part of our critical national [and] international infrastructure, which we do, you’ve got to be able to secure it, and the only way to do that is to layer encryption over the top,” explains Professor Alan Woodward, a computer security expert at the University of Surrey.

Few would be willing to use online shopping if their credit card details, address, and what they were buying were being sent across the internet for anyone to see.

Encryption provides privacy by encoding data into what appears to be meaningless junk, and it also creates trust by allowing us to prove who we are online — another essential element of doing business over the internet.

“A lot of cryptography isn’t just about keeping things secret, a lot of it is about proving identity,” says Bill Buchanan, professor of computing at Edinburgh Napier University. “There’s a lot of naïveté about cryptography as to thinking it’s just about keeping something safe on your disk.”

But the rise of the internet suddenly meant that access to cryptography became an issue of privacy and economics as well as one of national security, immediately sparking the clash that came to be known as ‘the crypto wars’.

Governments fought to control the use of encryption, while privacy advocates insisted its use was essential — not just for individual freedom, but also to protect the commercial development of the nascent internet.

What followed was a series of skirmishes, as the US government and others made increasingly desperate — and unsuccessful — efforts to reassert control over encryption technologies. One example in the mid-90s involved the NSA designing the Clipper chip, which was a way to give the agency access to the communications on any devices on which the chip was installed.

Another attempt at government control during this period came with the introduction of key escrow. Under the scheme, the US government would agree to license encryption providers, if they gave the state access to the keys used to decode communications.

On top of this were rules which only allowed products that used weak and easily-cracked encryption to be exported from the US.

Remarkably there was an unwelcome reminder of those days of watered-down encryption with the appearance of the recent FREAK flaw in the SSL security standard. The vulnerability could be used to force web browsers to default to the weaker “export-strength” encryption, which can be easily broken.
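On the defensive side, modern TLS libraries let clients and servers refuse old protocol versions and weak cipher suites outright. A minimal sketch with Python’s standard ssl module follows; the exact cipher string is illustrative, and current OpenSSL builds have already removed 1990s export-grade suites entirely.

```python
# A TLS context that refuses protocol downgrades and weak cipher suites.
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy protocol versions
ctx.set_ciphers("HIGH:!aNULL:!eNULL")          # strong, authenticated suites only

print(len(ctx.get_ciphers()), "cipher suites remain enabled")
```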

Few experts even knew that the option to use the weaker encryption still existed in the browsers commonly used today — a good example of the dangerous and unexpected consequences of attempts to control privacy technologies, long after the political decisions affecting it had been reversed and forgotten.

But by the early 2000s, it appeared that the privacy advocates had effectively won the crypto wars. The Clipper chip was abandoned, strong encryption software exports were allowed, key escrow failed, and governments realised it was all but impossible for them to control the use of encryption. It was understood that if they tried, the damage they would do to the internet economy would be too great.

Individual freedoms, and simple economics, had overwhelmed national security. In 2005, one campaigning group even cheerfully announced “The crypto wars are finally over and we won!”

They were wrong.

We now know that the crypto wars were never over. While privacy campaigners celebrated their victory, intelligence agencies were already at work breaking and undermining encryption. The second stage of the crypto wars — the spies’ secret war — had begun.

Editor’s note: Article reposted from ‘TechRepublic’


Steve Ranger. “The undercover war on your internet secrets: How online surveillance cracked our trust in the web – TechRepublic”

TechRepublic. N.p., Web. 26 May 2016.

Are we safe?

Hack the Pentagon Program

Hackers found about 90 vulnerabilities in the Defense Department’s public websites as part of a highly touted bug bounty program, officials say. Those vulnerabilities included the ability to manipulate website content, “but nothing that was… earth-shattering” and worth shuttering the program over, according to Corey Harrison, a member of the department’s Defense Digital Service.

The two-week bounty program, which Defense Secretary Ash Carter announced in Silicon Valley in March, wrapped up last week and could be a springboard for similar programs across federal government.

DDS is made up of about 15 entrepreneurs and tech hands who are trying to get the defense bureaucracy to apply a startup mentality to specific projects. A sign hanging in their office reads: “Get shit done,” Harrison said. He described an informal atmosphere in which the team is free to experiment with new tools such as the messaging application Slack. But his team’s tinkering is in some respects a world apart from DOD programming. If the broader department were to use Slack, for example, lawyers would have to make sure the application complies with Freedom of Information Act regulations.

Even the name of the bug bounty program, Hack the Pentagon, was initially controversial. “They told us the name was a non-starter, which is awesome,” Harrison said. “That’s a great place to start.”

Harrison described overwhelming interest in the program — organizers expected a couple hundred hackers to register, but ultimately there were 1,400.

Corporate bug bounty programs can be lucrative for hackers. Yahoo, for example, has paid security researchers $1.6 million since 2013 for bugs, including up to $15,000 per discovery, the Christian Science Monitor’s Passcode reported.

That will be the maximum possible bug bounty in the Pentagon’s pilot project, too. An estimated $75,000 total is available to pay hackers participating in the DOD program, he said, and officials are still parsing the program data to determine allotted payments. Yet some IT security experts have been critical of the DOD program. Robert Graham, a cybersecurity inventor and blogger, has asserted that DOD’s overtures to hackers have been undercut by the department’s discouragement of researchers from conducting their own scans of DOD assets.

“More than 250 million email accounts breached” – but how bad is it really?

Reuters just broke a story about a password breach said to affect more than 250 million webmail accounts around the world. The claims come from an American cyberinvestigation company that has reported on giant data breaches before: Hold Security.

The company’s founder, Alex Holden, reportedly told Reuters: “The discovery of 272.3 million stolen accounts included a majority of users of Mail.ru, Russia’s most popular email service, and smaller fractions of Google, Yahoo and Microsoft email users.”

The database supposedly contained “credentials,” or what Reuters referred to as “usernames and passwords,” implying that the breached data might very well let crooks right into the affected accounts without further hacking or cracking.

Stolen email accounts are extremely useful to cyber-criminals. For example, they can read your messages before you do, putting them in a powerful position to scam your friends, family, debtors or creditors out of money by giving believable instructions to redirect payments to bogus bank accounts. They can learn a raft of important personal details about your life, making it much easier for them to defraud you by taking out loans in your name. Worst of all, they may be able to trigger password resets on your other online accounts, intercept the emails that come back, and take over those accounts as well.

How bad is it?

Unfortunately, we can’t yet tell you how serious this alleged breach really is. The good news, straight off the bat, is that the figure of “272.3 million stolen accounts” is some three or four times bigger than reality. Many of the accounts were repeated several times in the database, with Holden admitting that, after de-duplication, only 57,000,000 Mail.ru accounts remained, plus “tens of millions of credentials” for Google, Yahoo and Microsoft accounts.

More good news is that if the stolen data really does include the actual passwords used by the account holders, it’s highly unlikely – in fact, it’s as good as impossible – that the database came from security breaches at any of the webmail providers listed. Properly-run web services never store your actual password, because they don’t need to; instead, they store a cryptographic value known as a hash that can be computed from your password.

The idea is that even if crooks manage to steal the whole password database, they can’t just read the passwords out of it. Instead, they have to guess repeatedly at each password, and compute the hash of each guess in turn, until they get a match.

Poorly chosen passwords can still be cracked, because the crooks try the most likely guesses first. But a reasonably complex password (something along the lines of IByoU/nvr/GE55, short for I bet you never guess) will take so long to turn up in the criminals’ “guess list” that it becomes as good as uncrackable, especially if you change your password soon after hearing about a breach. If the passwords in this case are real, it seems likely that they were stolen directly from users as they typed them in, for example by means of malware known as a keylogger that covertly keeps track of your keystrokes.
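A hedged Python sketch of both halves of that story, using PBKDF2 from the standard library: the service stores only a salted, deliberately slow hash, and an attacker who steals it must still test candidate passwords one at a time, likeliest first.

```python
# Salted, slow password hashing: the service never stores the password
# itself, and a thief with the database must guess candidates one by one.
import hashlib, os

ITERATIONS = 200_000  # deliberately slow: each guess costs real CPU time

def store(password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest          # this pair is what the database holds

def crack(salt, digest, guesses):
    for guess in guesses:        # the attacker's "guess list", likeliest first
        if hashlib.pbkdf2_hmac("sha256", guess.encode(), salt, ITERATIONS) == digest:
            return guess
    return None

common = ["password", "qwerty", "123456", "letmein"]
salt1, d1 = store("123456")           # a poorly chosen password
salt2, d2 = store("IByoU/nvr/GE55")   # the article's stronger example
print(crack(salt1, d1, common))       # -> '123456': falls immediately
print(crack(salt2, d2, common))       # -> None: not on any likely guess list
```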

The LinkedIn Chaos

Millions of LinkedIn passwords up for sale on the dark web.

Did you change your LinkedIn password after that massive 2012 leak of millions of passwords, which were subsequently posted online and cracked within hours? If not, you’d better hop to it, most particularly if you reuse passwords on other sites (and please tell us you don’t).

The news isn’t good: first off, what was initially thought to be a “massive” breach turns out to have been more like a massive breach that’s mainlining steroids. At the time of the breach 4 years ago, “only” 6.5 million hashed (but not salted!) passwords had been posted online. But now, there are a way-more-whopping 117 million LinkedIn account emails and passwords up for sale.

As Motherboard reports, somebody going by the name of “Peace” says the data was stolen during the 2012 breach. LinkedIn never did spell out exactly how many users were affected by that breach. In fact, LinkedIn spokesperson Hani Durzy told Motherboard that the company doesn’t actually know how many accounts were involved. Regardless, it appears that it’s far worse than anybody thought. Motherboard said that the stolen data’s up for sale on one site and in the possession of another.

The first is a dark web marketplace called The Real Deal that’s said to sell not only drugs and digital goods such as credit cards, but also hacking tools such as zero days and other exploits. Peace has listed some 167 million LinkedIn accounts on that marketplace with an asking price of 5 bitcoin, or around $2,200. The second place that apparently has the data is LeakedSource, a subscription-based search tool that lets people search for their leaked data. LeakedSource says it has 167,370,910 LinkedIn emails and passwords. Out of those 167 million accounts, 117 million have both emails and encrypted passwords, according to Motherboard.
A LeakedSource operator told Motherboard’s Lorenzo Franceschi-Bicchierai that so far, they’d cracked “90% of the passwords in 72 hours.” As far as verification goes, LinkedIn confirmed that the data’s legitimate.

On Wednesday, LinkedIn’s chief information security officer Cory Scott posted this blog post about the logins now up for sale:

“Yesterday, we became aware of an additional set of data that had just been released that claims to be email and hashed password combinations of more than 100 million LinkedIn members from that same theft in 2012. We are taking immediate steps to invalidate the passwords of the accounts impacted, and we will contact those members to reset their passwords. We have no indication that this is as a result of a new security breach.”

Federal Agencies Hope to Bid Farewell to Conventional Passwords

No matter how clever and well-constructed your current passwords may be, they may become obsolete under new guidance for federal system authentication. Indeed, in a recent GitHub public preview document, the National Institute of Standards and Technology (NIST) says it will offer dramatic changes to its guidelines for federal agencies’ digital authentication methods.

In its new approach, NIST is transforming identity-proofing to best suit the current Office of Management and Budget (OMB) guidance by helping agencies choose the most precise digital authentication technologies. This approach includes separating identity verification into discrete component elements. Using NIST’s process, individuals would establish their identity through what is called identity assurance and validate their credentials to gain entry into a given system through authenticator assurance—possibly a chip card or encrypted identity card (www.FCW.com).

Furthermore, to improve security, the document states that passwords could become entirely numeric, as security experts believe that requiring a mix of digits, letters and symbols in conventional passwords has thus far proved insignificant in protecting user information, despite the impact on usability and memorability. Instead, NIST advises that passwords be tested against a list of unacceptable passwords: those used in previous breaches, dictionary words, and specific words and names that users are most likely to choose.
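A minimal sketch of such a blocklist check in Python; the blocklist here is a tiny stand-in for a real corpus of breached passwords.

```python
# NIST-style acceptability check: reject candidates that are too short
# or appear on a blocklist of breached / easily guessed values.
BLOCKLIST = {"password", "123456", "qwerty", "letmein", "admin"}

def acceptable(candidate, blocklist=BLOCKLIST, min_length=8):
    if len(candidate) < min_length:
        return False                        # too short to resist guessing
    return candidate.lower() not in blocklist

print(acceptable("letmein"))          # False: short and a known-breached value
print(acceptable("IByoU/nvr/GE55"))   # True: long and not on the blocklist
```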

To further guarantee security and protection, users will not be able to have a password “hint” that is ultimately accessible to unauthenticated personnel. In other words, the familiar “first elementary school” or “name of first pet” password prompt will cease to exist.

Although these changes to password security will take place among federal agencies, many Americans will not have this level of user authentication. Thus, the infographic below offers a variety of useful tips and instruction on how to create a breach-proof password.

According to the NIST, these technologically advanced guidelines for password security and user authentication “should have a tested equal error rate of 1 in 1,000 or better, with a false-match rate of 1 in 1,000 or better” (www.FCW.com). When the NIST implements these new guidelines, federal government user data will not only have a greater level of security, it will also offer unprecedented protection to national confidential data from malicious data breaches, hackers, and cyber-attacks.

Road to Super Intelligence

Imagine taking a time machine back to 1750—a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay. When you get there, you retrieve a dude, bring him to 2015, and then walk him around and watch him react to everything. It’s impossible for us to understand what it would be like for him to see shiny capsules racing by on a highway, talk to people who had been on the other side of the ocean earlier in the day, watch sports that were being played 1,000 miles away, hear a musical performance that happened 50 years ago, and play with my magical wizard rectangle that he could use to capture a real-life image or record a living moment, generate a map with a paranormal moving blue dot that shows him where he is, look at someone’s face and chat with them even though they’re on the other side of the country, and worlds of other inconceivable sorcery. This is all before you show him the internet or explain things like the International Space Station, the Large Hadron Collider, nuclear weapons, or general relativity.

This experience for him wouldn’t be surprising or shocking or even mind-blowing—those words aren’t big enough. He might actually die!

This pattern—human progress moving quicker and quicker as time goes on—is what futurist Ray Kurzweil calls human history’s Law of Accelerating Returns. This happens because more advanced societies have the ability to progress at a faster rate than less advanced societies—because they’re more advanced.

“We are on the edge of change comparable to the rise of human life on Earth” — Vernor Vinge

There is a lot of excitement about artificial intelligence (AI) and how to create computers capable of intelligent behavior. After years of steady but slow progress on making computers “smarter” at everyday tasks, a series of breakthroughs in the research community and industry have recently spurred momentum and investment in the development of this field.

Today’s AI is confined to narrow, specific tasks, and isn’t anything like the general, adaptable intelligence that humans exhibit. Despite this, AI’s influence on the world is growing. The rate of progress we have seen will have broad implications for fields ranging from healthcare to image- and voice-recognition. In healthcare, the President’s Precision Medicine Initiative and the Cancer Moonshot will rely on AI to find patterns in medical data and, ultimately, to help doctors diagnose diseases and suggest treatments to improve patient care and health outcomes.

In education, AI has the potential to help teachers customize instruction for each student’s needs. And, of course, AI plays a key role in self-driving vehicles, which have the potential to save thousands of lives, as well as in unmanned aircraft systems, which may transform global transportation, logistics systems, and countless industries over the coming decades.

Like any transformative technology, however, artificial intelligence carries some risk and presents complex policy challenges along several dimensions, from jobs and the economy to safety and regulatory questions. For example, AI will create new jobs while phasing out some old ones—magnifying the importance of programs like TechHire that are preparing our workforce with the skills to get ahead in today’s economy, and tomorrow’s. AI systems can also behave in surprising ways, and we’re increasingly relying on AI to advise decisions and operate physical and virtual machinery—adding to the challenge of predicting and controlling how complex technologies will behave.

There are tremendous opportunities and an array of considerations across the Federal Government in privacy, security, regulation, law, and research and development to be taken into account when effectively integrating this technology into both government and private-sector activities.

That is why the White House Office of Science and Technology Policy announced public workshops over the coming months on topics in AI to spur public dialogue on artificial intelligence and machine learning and identify challenges and opportunities related to this emerging technology.

The Federal Government also is working to leverage AI for public good and toward a more effective government. A new National Science and Technology Council (NSTC) Subcommittee on Machine Learning and Artificial Intelligence will meet for the first time next week. This group will monitor state-of-the-art advances and technology milestones in artificial intelligence and machine learning within the Federal Government, in the private sector, and internationally; and help coordinate Federal activity in this space.

Broadly, between now and the end of the Administration, the NSTC group will work to increase the use of AI and machine learning to improve the delivery of government services. Such efforts may include empowering Federal departments and agencies to run pilot projects evaluating new AI-driven approaches and government investment in research on how to use AI to make government services more effective. Applications in AI to areas of government that are not traditionally technology-focused are especially significant; there is tremendous potential in AI-driven improvements to programs and delivery of services that help make everyday life better for Americans in areas related to urban systems and smart cities, mental and physical health, social welfare, criminal justice, the environment, and much more.

Editor’s note: Ideas inspired by


Ed Felten. “Preparing for the future of Artificial Intelligence – WhiteHouse”

WhiteHouse.gov. N.p., Web. 5 May 2016.

Most Wanted: Catching a Cybercriminal

Most in the United States are aware that the worst criminal offenders around the globe appear on the Federal Bureau of Investigation’s (FBI) “Most Wanted Fugitives” list, the “Most Wanted Terrorists” list, or the “Wanted by the FBI” podcast. But what about cybercriminals? As hackers and cybercriminals become more advanced in their hacking techniques, the FBI’s investigative team has been less concerned with the identity of the perpetrator than with preventing access to its systems in the first place. However, the FBI has recently begun to target the individual perpetrators of cybercrime.

Given the ubiquity of cybercrime in the age of technology, the FBI’s list of “Most Wanted Cybercriminals” has grown considerably since March. In fact, the list grew by nearly 50 percent when two young Syrians were charged with attempting to hack United States companies and media organizations, followed by the indictment of seven Iranian citizens accused of coordinating a months-long cyber attack on financial organizations located in New York. When Attorney General Loretta Lynch announced the aforementioned indictments, she said the decision to provide public access to the most-wanted cybercriminals is a “new approach” at the Department of Justice that falls in line with its name-and-shame campaign (www.nextgov.com). The campaign, which launched in 2012, placed five Chinese hackers on the cyber most-wanted list, and so far all of those listed are men and mostly foreign nationals.

[Infographic: why the United States needs more cybersecurity professionals to thwart cybercriminals]

In the age of digital technology, the United States hopes to make a more concerted effort to protect both public and private data. Without such proactive measures, the United States’ digital infrastructure may increasingly become the target of malicious breaches, hacks, and cyberattacks.


Open Data Can Do More Than Government Thinks It Can

Government thinks open data is an add-on that boosts transparency, but it’s more than that. Most open data portals don’t look like labors of love; they look like abandoned, last-minute science fair projects. The open data movement is more than a decade old, yet some are still asking why they should bother.

“Right now, it is irrational for almost anybody who works in government to open data. It makes no sense,” Waldo Jaquith said. “Most people, it’s not in their job description to open data — they’re just the CIO. So if he fails to open data, worst case, nothing bad happens. But if he does open some data and it has PII [personally identifying information], then his worst case is that he’s hauled before a legislative subcommittee, grilled, humiliated and fired.”

Jaquith, the director of U.S. Open Data, is one of the movement’s most active advocates, yet even he is blunt about its prospects. Open data is struggling to gain financial and spiritual backing and may fizzle out within the next two years, he said, and a glance at government’s attitude toward the entire “open” concept supports that timeline.

The people who are really into open data — like Jaquith — aren’t the fad-following type. Open data’s disciples believe in it because they’ve seen that just a little prodding in the right spots can make a big difference. In 2014, Jaquith bought a temporary license for Virginia’s business registration data for $450 and published the records online. That data wasn’t just news to the public — it had been kept from Virginia’s municipal governments too. Before that, the state’s municipal governments had no way of knowing which businesses existed within their boundaries and, therefore, they had no way of knowing which businesses weren’t paying license fees and property taxes. Jaquith estimated (“wildly,” he admits) that this single data set is worth $100 million to Virginia’s municipal governments collectively.

The disconnect between the massive operational potential that open data holds and government’s slow movement toward harnessing it can be explained simply. Government thinks open data is an add-on that boosts transparency, but it’s more than that. Open data isn’t a $2 side of guacamole that adds flavor to the burrito. It’s the restaurant’s mission statement.

Here are six ideas that can help government more fully realize open data’s transformative power.

1. RECONSIDER YOUR DATA’S PURPOSE

Open data isn’t just about transparency and economic development. If it were, those things would have happened by now. People still largely don’t know what their governments are doing and no one’s frequenting their city’s open data portal to find out — they read the news. Open data portals haven’t stopped corruption; the unscrupulous simply reroute their activities around the spotlight. And if anyone’s using open data to build groundbreaking apps that improve the world and generate industry, they’re doing a great job keeping it a secret. For government, open data is about working smarter.

“I’m tired of the argument of ‘Oh, it will unlock value to the private sector,’” Jaquith said. “That’s nice. I hope people make billions of dollars off of that. But nobody in any government is going to spend any real amount of time on all the work that goes into opening all the data sets on a sustainable, complete basis because some stranger somewhere might get rich.”

Open data’s most basic advantage is that it makes life easier for government workers. Information that’s requested regularly can be put online, freeing workers to do other tasks. At its best, open data uncovers interjurisdictional insights that save money and improve operations. And no matter how tenuous, peripheral bonuses like transparency and economic development are still there too. Governments aren’t gaining the benefits of open data today because there hasn’t been a rigorous effort to integrate the concept of openness into public-sector work.

One unnamed city that ranks respectably in the U.S. City Open Data Census has more than 1,000 records on its open data portal. But only 132 of those records are data sets and 86 of those data sets are pieces of a single budget that have been split apart. This is a common practice across the public sector and one that reveals intent. For the most part, governments aren’t publishing their data because they know it’s a useful resource that ought to be easily accessible, well curated, neat and current so that it can be used by all. It’s because 1,000 sounds better than 50 when an official is giving a speech or addressing stakeholders, and they’re not the ones who have to use it.

2. CONSUME YOUR OWN OPEN DATA

Governments use data. Open data portals are designed for displaying and sharing information in an organized way. Therefore, governments should use a tool designed for the thing they’re trying to do. Even putting aside the “open” concept, public-sector offices around the nation would benefit hugely from having a common, shared pool of data they can draw upon when they need reliable information. Putting the data online is the most practical way to do that — and it also happens to meet the political dictates of transparency — but government should be doing this for its own sake.
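
As a concrete illustration, here is a minimal sketch of what consuming your own open data can look like in practice, assuming a Socrata-style portal endpoint; the domain and data set identifier below are hypothetical placeholders, not a real feed.

    import requests

    # Hypothetical Socrata-style endpoint -- substitute a real portal
    # domain and data set identifier.
    PORTAL_URL = "https://data.example.gov/resource/abcd-1234.json"

    def fetch_business_licenses(limit=1000):
        """Pull the same published records the public sees."""
        response = requests.get(PORTAL_URL, params={"$limit": limit})
        response.raise_for_status()
        return response.json()

    if __name__ == "__main__":
        records = fetch_business_licenses()
        # A worker can answer a routine question from the published copy
        # instead of requesting the data from another office.
        active = [r for r in records if r.get("status") == "active"]
        print(f"{len(active)} active licenses out of {len(records)} records")

The point is less the dozen lines of code than the posture: if the portal is reliable enough for internal work, it is reliable enough for the public, and vice versa.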

“The most common mistake I see governments make with open data is thinking that publication is the end of the activity, rather than the beginning of the activity,” said Dan O’Neil, executive director of the Smart Chicago Collaborative. “Because publishing data can be, if we live in a perfect world, simply a prefatory step to allowing residents to talk about how data affects their lives and helps them live better. But usually, what happens is they publish data and they run as fast as they can in the other direction.”

3. PLAN BEYOND TECHNOLOGY

Open data has outgrown the novelty phase, and that means it needs organizational and policy support to survive. It needs comprehensive planning and believers who will act. People wouldn’t be giving up much if they abandoned open data today, O’Neil said, because open data hasn’t done much. The tragedy of giving up now, he said, would be purely a loss of potential, because open data could change the world if the focus shifted away from technology and toward the needs of the people.

An organization called City Bureau encourages young people of color to become reporters, in an effort to restore balance to journalistic coverage of Chicago’s South and West sides. Another South Side journalistic endeavor, the Invisible Institute, serves as a watchdog that uses investigative reporting, litigation and public discussion to further its civil rights goals. O’Neil’s world is one of civic tech and social justice, but regardless of whether a person supports these particular groups ideologically, everyone can learn from their approach.

“That’s where it’s at,” O’Neil said. “Getting data that isn’t open and making it open and then having an actual community strategy around analyzing not just the data, but the social justice issues around the general milieu.”

Government needs to do the same if open data is to find meaning. Just putting data online and hoping for the best isn’t wrong, but it doesn’t do much. Open data needs a clear plan, and it needs to come from a wide patronage within government.

“The most common mistake is focusing on the project over the practice,” said Will Saunders, Washington state’s open data guy (his actual title). “It’s always attractive to have an executive sponsor, and a lot of times open data projects get started as a transparency commitment, as ‘a hallmark of my administration’ kind of thing. [Sometimes] you wind up having a diligent, small group of folks who facilitate the publication of data and then if there’s a leadership change in three or four years, then a lot of the sustainability just isn’t there.”

4. AUTOMATE SLOWLY

Washington could be publishing three to four times more data than it does today, Saunders said, but the state deliberately holds back: building publication into program design, and automating it, is what ensures the efforts will stick.

“Program managers know that they can and should publish, and when they do, they tend to link it to their own programmatic goals as opposed to a specific political commitment,” he said. “What I typically do is work with agencies to see if there’s a way I can encourage them to make publication part of their program design, and if I can’t, then I wait for another day.”

This approach is slower, but like proper diet and exercise, experts recommend it because it works.

Open data’s relevance will grow only if efforts mature. In Washington and elsewhere, data sets are often used for purposes different from what was originally intended. Opportunities to repurpose data will appear more frequently as the information becomes better organized, shared and understood. One severe obstacle to that prospect is that today there exist few standard schemas for publishing data. Roads, for instance, cross every boundary the nation has, and yet road data takes a new format in each jurisdiction. Today, without standards, a large project that uses open road data sounds like more trouble than it’s worth.
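
To make the schema problem concrete, here is a small sketch, with invented field names and records (nothing below comes from a real jurisdiction’s feed), of the translation work every cross-jurisdiction road project must do today.

    # Invented examples: the same road attributes published under
    # different names and units by two jurisdictions.
    city_a_road = {"RoadName": "Main St", "SpeedLimitMPH": 35, "Lanes": 2}
    city_b_road = {"name": "Main Street", "speed_kmh": 56, "lane_count": 2}

    def normalize_city_a(record):
        """Map city A's feed into a hypothetical shared schema."""
        return {
            "name": record["RoadName"],
            "speed_limit_mph": record["SpeedLimitMPH"],
            "lanes": record["Lanes"],
        }

    def normalize_city_b(record):
        """Map city B's feed, converting km/h to mph along the way."""
        return {
            "name": record["name"],
            "speed_limit_mph": round(record["speed_kmh"] * 0.621371),
            "lanes": record["lane_count"],
        }

    # A shared publishing standard would move this mapping work to the
    # publisher, once, instead of leaving it to every consumer.
    print([normalize_city_a(city_a_road), normalize_city_b(city_b_road)])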

5. COLLABORATE ON THE CREATION OF PUBLISHING STANDARDS

Government has a hard time following publishing standards today because not many exist. The President’s Task Force on 21st Century Policing is developing standards for police data, Data.gov is working toward a standard that will let companies like Uber publish their ride data meaningfully, and programs like Bloomberg’s What Works Cities initiative are positioned to develop standards across city lines. Comprehensive, accessible publishing rules would reduce the work required to free data sets, and they would solve many of today’s data-sharing and comprehension snags.

6. TRUST YOUR EXPERTS

The public isn’t qualified to tell government how to use its data, because the public doesn’t understand government. Most people think “the government” means the president or Congress. No one understands the challenges of government better than those who run it, and those are the people who should guide the use of public-sector data.

Utah is growing its open data automation daily under the guidance of experts. The technology office monitors which data sets its offices need and educates stakeholders on how to use that information. The state auditor, the health-care system and external data requesters are among those learning, said Dave Fletcher, Utah’s CTO.

“Increasingly we’re working on an initiative that we’re calling data-driven government to make better decisions based on data,” Fletcher said, adding that the state shares statewide data with counties so that information like graduation rates, unemployment rates, taxes and air quality measures is easily accessible to commissioners.

Drew Mingl, Utah’s open data coordinator, said people are grateful to have a definitive, centralized source of state information that can yield new insights. Data drawn from the state’s Medicare system, for example, showed a $25,000 deviation in the cost of hip replacement surgery between two neighboring counties.
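
Here is a brief sketch of the kind of comparison Mingl describes, using invented numbers chosen to mirror the article’s figure rather than Utah’s actual data: once costs from multiple counties sit in one pool, a simple group-and-compare pass surfaces the outlier.

    import statistics
    from collections import defaultdict

    # Invented sample records -- not Utah's actual data.
    claims = [
        {"county": "County A", "procedure": "hip replacement", "cost": 30_000},
        {"county": "County A", "procedure": "hip replacement", "cost": 32_000},
        {"county": "County B", "procedure": "hip replacement", "cost": 55_000},
        {"county": "County B", "procedure": "hip replacement", "cost": 57_000},
    ]

    # Group costs by county for the procedure of interest.
    by_county = defaultdict(list)
    for claim in claims:
        if claim["procedure"] == "hip replacement":
            by_county[claim["county"]].append(claim["cost"])

    averages = {county: statistics.mean(costs) for county, costs in by_county.items()}
    spread = max(averages.values()) - min(averages.values())
    print(averages)
    print(f"Deviation between counties: ${spread:,.0f}")  # $25,000 with these numbers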

“People are now making better, more informed decisions because we’ve put all this state data in one place where they can get access to it,” Mingl said.

Los Angeles runs one of the best open data portals in the nation. It ranks first on the U.S. City Open Data Census, with nearly 100 percent of the city’s data open to the public. It’s not perfect, but what it has, it gained through the knowledge of the city’s experienced workers.

Ted Ross, general manager of L.A.’s Information Technology Agency, said the city wanted three things from its portal: a way for average citizens to view data casually; capabilities for data scientists who want to do more with the data, such as downloading it or using APIs; and the ability to integrate federated data sets from across systems. Contracting a vendor was the easiest way to reach those goals, Ross said, so rather than develop the portal in-house, that’s what Los Angeles did.

The city listens to the people who use data most to guide its efforts: journalists, researchers, officials and technology staff, Ross said. This feedback ensures the city’s doing more than fulfilling a political mandate, he said.

L.A. has done more with its data than leave it dangling. Vision Zero, a multinational road safety program, promotes roadway design to reduce pedestrian injury and death, and it’s powered by the city’s open data.

“We worked with USC, who volunteered about 25 graduate-level data science students and three professors, and we basically analyzed for causation and commonality, and trends relating to those, and they can help identify some of the high-value networks,” Ross said. “That’s a prime example of taking open data and … using it as a platform to interact with a local university and actually identify information and insight that’s being leveraged to save lives.”

Open data doesn’t need to save lives — and it usually won’t. Its value is in supporting the core functions of government, which are basic things like keeping parks and water clean and trash cans empty, said Josh Baron, applications delivery manager for Ann Arbor, Mich., and that should be the goal of everyone who works in government.

“Our No. 1 job,” Baron said, “is to support the lines of business who are out there making the city a wonderful place to live.”

Editor’s note:


Colin Wood. “6 Ideas to Help Government Realize Open Data’s Transformative Power.” GovTech. N.p., Web. 21 Apr. 2016.