The undercover war on your internet secrets

A black-shrouded figure appears on the screen, looming over the rapt audience, talking about surveillance. But this is no Big Brother figure seeking obedience; rather the opposite. Perhaps even his nemesis.

NSA contractor-turned-whistleblower Edward Snowden is explaining how his former employer and other intelligence agencies have worked to undermine privacy on the internet and beyond.

“We’re seeing systemic attacks on the fabrics of our systems, the fabric of our communications… by undermining the security of our communications, they enable surveillance,” he warns.

He is speaking at the conference via a video link from Russia, where he has taken refuge after leaking the documents detailing some of the NSA’s surveillance projects. The room behind him is in darkness, giving away nothing about his exact location.

“Surveillance is not possible when our movements and communications are safe and protected — a satellite cannot see you when you are inside your home — but an unprotected computer with an open webcam can,” he adds.

Edward Snowden speaking at the CeBIT tech show
Image: Deutsche Messe, Hannover

One of the most significant technologies being targeted by the intelligence services is encryption.

Online, encryption surrounds us, binds us, identifies us. It protects things like our credit card transactions and medical records, encoding them so that — unless you have the key — the data appears to be meaningless nonsense.

Encryption is one of the elemental forces of the web, even though it goes unnoticed and unremarked by the billions of people that use it every day.

But that doesn’t mean that the growth in the use of encryption isn’t controversial.

For some, strong encryption is the cornerstone of security and privacy in any digital communications, whether that’s for your selfies or for campaigners against an autocratic regime.

Others, mostly police and intelligence agencies, have become increasingly worried that the absolute secrecy that encryption provides could make it easier for criminals and terrorists to use the internet to plot without fear of discovery.

As such, the outcome of this war over privacy will have huge implications for the future of the web itself.

The code wars

Codes have been used to protect data in transit for thousands of years, and have long been a key tool in warfare: the Caesar cipher was named after Julius Caesar, who used it to protect his military secrets from prying eyes.

These ciphers were extremely basic, of course: the Caesar cipher turned a message into code simply by replacing each letter with the one three down in the alphabet, so that ‘a’ became ‘d’.
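
The whole scheme fits in a few lines of code. Here is a minimal Python sketch of the shift described above, for illustration only:

```python
# A toy implementation of the Caesar cipher: shift each letter three
# places down the alphabet, so that 'a' becomes 'd'.
def caesar_encrypt(message: str, shift: int = 3) -> str:
    out = []
    for ch in message:
        if ch.isalpha():
            base = ord("a") if ch.islower() else ord("A")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation alone
    return "".join(out)

print(caesar_encrypt("attack at dawn"))            # -> dwwdfn dw gdzq
print(caesar_encrypt("dwwdfn dw gdzq", shift=-3))  # decrypt: same shift in reverse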

Ciphers became more sophisticated, and harder to break, over the centuries, but it was the Second World War that demonstrated the real importance of encryption — and cracking it. The work done at Bletchley Park to crack German codes including Enigma had a famous impact on the course of the war.

As a result, once the war was over, encryption technology was put on the US Munitions List alongside tanks and guns as an ‘auxiliary military technology’, which put restrictions on its export.

“The real fundamental problem is the internet and the protocol it’s all based on was never intended to be secure.” - ALAN WOODWARD, SURREY UNIVERSITY

In practice, these government controls didn’t make much difference to ordinary people, as there were few uses for code-making — that is, encryption — outside the military.

But all that changed with the arrival of the personal computer. It became an even bigger issue as the huge economic potential of the web became apparent.

“The internet and the protocol it’s all based on was never intended to be secure, so if we are going to rely on the internet as part of our critical national [and] international infrastructure, which we do, you’ve got to be able to secure it, and the only way to do that is to layer encryption over the top,” explains Professor Alan Woodward, a computer security expert at the University of Surrey.

Few would be willing to use online shopping if their credit card details, address, and purchases were sent across the internet for anyone to see.

Encryption provides privacy by encoding data into what appears to be meaningless junk, and it also creates trust by allowing us to prove who we are online — another essential element of doing business over the internet.

“A lot of cryptography isn’t just about keeping things secret, a lot of it is about proving identity,” says Bill Buchanan, professor of computing at Edinburgh Napier University. “There’s a lot of naïveté about cryptography as to thinking it’s just about keeping something safe on your disk.”
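
Buchanan’s point about identity is easy to see in miniature. A digital signature lets anyone holding a public key check that a message really came from the holder of the matching private key. A minimal sketch using the third-party Python cryptography package (the message and key names are our own illustration, not any production system):

```python
# Proving identity with public-key cryptography (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # kept secret by the signer
public_key = private_key.public_key()        # shared with the world

message = b"I am really alice@example.com"
signature = private_key.sign(message)        # only the private key can produce this

try:
    public_key.verify(signature, message)    # anyone with the public key can check
    print("Signature valid: identity proven")
except InvalidSignature:
    print("Signature invalid")
```

This is essentially what happens, at far greater scale, every time a browser checks a website’s certificate.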

But the rise of the internet suddenly meant that access to cryptography became an issue of privacy and economics as well as one of national security, immediately sparking the clash that came to be known as ‘the crypto wars’.

Governments fought to control the use of encryption, while privacy advocates insisted its use was essential — not just for individual freedom, but also to protect the commercial development of the nascent internet.

What followed was a series of skirmishes, as the US government and others made increasingly desperate — and unsuccessful — efforts to reassert control over encryption technologies. One example in the mid-90s involved the NSA designing the Clipper chip, which was a way to give the agency access to the communications on any devices on which the chip was installed.

Another attempt at government control during this period came with the introduction of key escrow. Under the scheme, the US government would agree to license encryption providers, if they gave the state access to the keys used to decode communications.

On top of this were rules which only allowed products that used weak and easily-cracked encryption to be exported from the US.

Remarkably, there was an unwelcome reminder of those days of watered-down encryption with the recent appearance of the FREAK flaw in the SSL security standard. The vulnerability could be used to force web browsers to fall back to the weaker “export-strength” encryption, which can be easily broken.

Few experts even knew that the option to use the weaker encryption still existed in the browsers commonly used today — a good example of the dangerous and unexpected consequences of attempts to control privacy technologies, lingering long after the political decisions behind them have been reversed and forgotten.
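
The fix, once FREAK came to light, was straightforward in principle: stop offering the export-grade cipher suites at all. As a rough sketch of the idea, Python’s ssl module accepts an OpenSSL-style cipher string that excludes them explicitly (modern default contexts already reject most of this; the exact string is illustrative):

```python
# Sketch: refuse export-grade and other deliberately weak cipher suites.
# "!EXPORT" strips the 1990s export-strength suites FREAK relied on;
# "!aNULL" and "!eNULL" strip unauthenticated and unencrypted suites.
import ssl

ctx = ssl.create_default_context()
ctx.set_ciphers("HIGH:!EXPORT:!aNULL:!eNULL")
```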

But by the early 2000s, it appeared that the privacy advocates had effectively won the crypto wars. The Clipper chip was abandoned, strong encryption software exports were allowed, key escrow failed, and governments realised it was all but impossible for them to control the use of encryption. It was understood that if they tried, the damage they would do to the internet economy would be too great.

Individual freedoms, and simple economics, had overwhelmed national security. In 2005, one campaigning group even cheerfully announced “The crypto wars are finally over and we won!”

They were wrong.

We now know that the crypto wars were never over. While privacy campaigners celebrated their victory, intelligence agencies were already at work breaking and undermining encryption. The second stage of the crypto wars — the spies’ secret war — had begun.

Editor’s note:

Steve Ranger, “The undercover war on your internet secrets: How online surveillance cracked our trust in the web,” TechRepublic, 26 May 2016.

Are we safe?

Hack the Pentagon Program

Hackers found about 90 vulnerabilities in the Defense Department’s public websites as part of a highly touted bug bounty program, officials say. Those vulnerabilities included the ability to manipulate website content, “but nothing that was… earth-shattering” and worth shuttering the program over, according to Corey Harrison, a member of the department’s Defense Digital Service.

The two-week bounty program, which Defense Secretary Ash Carter announced in Silicon Valley in March, wrapped up last week and could be a springboard for similar programs across federal government.

DDS is made up of about 15 entrepreneurs and tech hands who are trying to get the defense bureaucracy to apply a startup mentality to specific projects. A sign hanging in their office reads: “Get shit done,” Harrison said. He described an informal atmosphere in which the team is free to experiment with new tools such as the messaging application Slack. But his team’s tinkering is in some respects a world apart from DOD programming. If the broader department were to use Slack, for example, lawyers would have to make sure the application complies with Freedom of Information Act regulations.

Even the name of the bug bounty program, Hack the Pentagon, was initially controversial. “They told us the name was a non-starter, which is awesome,” Harrison said. “That’s a great place to start.”

Harrison described overwhelming interest in the program — organizers expected a couple hundred hackers to register, but ultimately there were 1,400.

Corporate bug bounty programs can be lucrative for hackers. Yahoo, for example, has paid security researchers $1.6 million since 2013 for bugs, including up to $15,000 per discovery, Christian Science Monitor’s Passcode reported.

That will be the maximum possible bug bounty in the Pentagon’s pilot project, too. An estimated $75,000 in total is available to pay hackers participating in the DOD program, he said, and officials are still parsing the program data to determine payouts. Yet some IT security experts have been critical of the DOD program. Robert Graham, a cybersecurity inventor and blogger, has argued that DOD’s overtures to hackers are undercut by the department discouraging researchers from conducting their own scans of DOD assets.

“More than 250 million email accounts breached” – but how bad is it really?

Reuters just broke a story about a password breach said to affect more than 250 million webmail accounts around the world. The claims come from an American cyberinvestigation company that has reported on giant data breaches before: Hold Security.

The company’s founder, Alex Holden, reportedly told Reuters that: “The discovery of 272.3 million stolen accounts included a majority of users of Mail.ru, Russia’s most popular email service, and smaller fractions of Google, Yahoo and Microsoft email users.”

The database supposedly contained “credentials,” or what Reuters referred to as “usernames and passwords,” implying that the breached data might very well let crooks right into the affected accounts without further hacking or cracking.

Stolen email accounts are extremely useful to cyber-criminals. For example, they can read your messages before you do, putting them in a powerful position to scam your friends, family, debtors or creditors out of money by giving believable instructions to redirect payments to bogus bank accounts. They can learn a raft of important personal details about your life, making it much easier for them to defraud you by taking out loans in your name. Worst of all, they may be able to trigger password resets on your other online accounts, intercept the emails that come back, and take over those accounts as well.

How bad is it?

Unfortunately, we can’t yet tell you how serious this alleged breach really is. The good news, straight off the bat, is that the figure of “272.3 million stolen accounts” is some three or four times bigger than reality: many of the accounts were repeated several times in the database. Holden admitted that, after de-duplication, only 57 million Mail.ru accounts remained, plus “tens of millions of credentials” for Google, Yahoo and Microsoft accounts.

More good news is that if the stolen data really does include the actual passwords used by the account holders, it’s highly unlikely – in fact, it’s as good as impossible – that the database came from security breaches at any of the webmail providers listed. Properly-run web services never store your actual password, because they don’t need to; instead, they store a cryptographic value known as a hash that can be computed from your password.

The idea is that even if crooks manage to steal the whole password database, they can’t just read the passwords out of it. Instead, they have to guess repeatedly at each password, and compute the hash of each guess in turn, until they get a match.

Poorly chosen passwords can still be cracked, because the crooks try the most likely guesses first. But a reasonably complex password (something along the lines of IByoU/nvr/GE55, short for I bet you never guess) will take so long to turn up in the criminals’ “guess list” that it becomes as good as uncrackable, especially if you change your password soon after hearing about a breach. If the passwords in this case are real, it seems likely that they were stolen directly from users as they typed them in, for example by means of malware known as a keylogger that covertly keeps track of your keystrokes.
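
To make those hash-and-guess mechanics concrete, here is a minimal Python sketch of how a salted, deliberately slow password hash works; the iteration count and parameters are illustrative, not any provider’s actual scheme:

```python
# Store a salted, slow hash (PBKDF2 here), never the password itself.
import hashlib, hmac, os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest    # store both; discard the password

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

salt, stored = hash_password("IByoU/nvr/GE55")
print(verify_password("IByoU/nvr/GE55", salt, stored))  # True
print(verify_password("123456", salt, stored))          # False
```

Because every guess costs the attacker the same 200,000 iterations, a complex password pushes the “guess list” time out to impractical lengths.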

The LinkedIn Chaos

Millions of LinkedIn passwords up for sale on the dark web.

Did you change your LinkedIn password after that massive 2012 leak of millions of passwords, which were subsequently posted online and cracked within hours? If not, you’d better hop to it, most particularly if you reuse passwords on other sites (and please tell us you don’t).

The news isn’t good: first off, what was initially thought to be a “massive” breach turns out to have been more like a massive breach that’s mainlining steroids. At the time of the breach 4 years ago, “only” 6.5 million encrypted (but not salted!) passwords had been posted online. But now, there are a way-more-whopping 117 million LinkedIn account emails and passwords up for sale.

As Motherboard reports, somebody going by the name of “Peace” says the data was stolen during the 2012 breach. LinkedIn never did spell out exactly how many users were affected by that breach. In fact, LinkedIn spokesperson Hani Durzy told Motherboard that the company doesn’t actually know how many accounts were involved. Regardless, it appears that it’s far worse than anybody thought. Motherboard said that the stolen data’s up for sale on one site and in the possession of another.

The first is a dark web marketplace called The Real Deal that’s said to sell not only drugs and digital goods such as credit cards, but also hacking tools such as zero days and other exploits. Peace has listed some 167 million LinkedIn accounts on that marketplace with an asking price of 5 bitcoin, or around $2,200. The second place that apparently has the data is LeakedSource, a subscription-based search tool that lets people search for their leaked data. LeakedSource says it has 167,370,910 LinkedIn emails and passwords. Out of those 167 million accounts, 117 million have both emails and encrypted passwords, according to Motherboard.
A LeakedSource operator told Motherboard’s Lorenzo Franceschi-Bicchierai that so far, they’d cracked “90% of the passwords in 72 hours.” As far as verification goes, LinkedIn confirmed that the data’s legitimate.
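
That cracking speed is no mystery. LinkedIn’s 2012 hashes were widely reported to be unsalted SHA-1, and without per-user salts, one precomputed table of common-password hashes can be matched against every account in the dump at once. A toy Python illustration:

```python
# Why unsalted hashes fall fast: hash the common passwords once,
# then look up every leaked hash in that single table.
import hashlib

common_passwords = ["123456", "password", "linkedin", "qwerty"]
lookup = {hashlib.sha1(p.encode()).hexdigest(): p for p in common_passwords}

leaked_hashes = [                      # stand-ins for dumped hashes
    hashlib.sha1(b"linkedin").hexdigest(),
    hashlib.sha1(b"123456").hexdigest(),
]

for h in leaked_hashes:
    if h in lookup:
        print(f"{h[:12]}... cracked: {lookup[h]}")

# With per-user salts, the crooks would need a fresh table for every
# account, multiplying the work by the number of users.
```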

On Wednesday, LinkedIn’s chief information security officer Cory Scott published a blog post about the logins now up for sale:

“Yesterday, we became aware of an additional set of data that had just been released that claims to be email and hashed password combinations of more than 100 million LinkedIn members from that same theft in 2012. We are taking immediate steps to invalidate the passwords of the accounts impacted, and we will contact those members to reset their passwords. We have no indication that this is as a result of a new security breach.”

Federal Agencies Hope to Bid Farewell to Conventional Passwords

No matter how clever and well-constructed your current passwords may be, they may become obsolete under new guidance for federal system authentication. Indeed, in a recent GitHub public preview document, the National Institute of Standards and Technology (NIST) says it will offer dramatic changes to its guidelines for federal agencies’ digital authentication methods.

In its new approach, NIST is transforming how it handles identity-proofing to fit current Office of Management and Budget (OMB) guidance, helping agencies choose the most precise digital authentication technologies. This approach includes separating identity verification into distinct component elements. Under NIST’s process, individuals would establish who they are through what is called identity assurance, and validate their credentials to gain entry into a given system through authenticator assurance—possibly a chip card or encrypted identity card (www.FCW.com).

Furthermore, the document states that passwords could become entirely numeric, as security experts believe that requiring digits, letters and symbols in conventional passwords has thus far done little to protect user information, despite the cost to usability and memorability. Instead, NIST advises that passwords be tested against a list of unacceptable passwords: those used in previous breaches, dictionary words, and the specific words and names that users are most likely to choose.
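
The blocklist check itself is simple to picture in code. A minimal sketch, with a stand-in banned list (in practice it would be built from breach corpora and dictionaries):

```python
# Reject a candidate password if it appears on a list of known-breached
# or easily guessed choices, per the NIST recommendation described above.
def is_acceptable(password: str, banned: set[str]) -> bool:
    return password.lower() not in banned

banned = {"password", "123456", "letmein", "qwerty"}  # illustrative only
print(is_acceptable("Winter2016", banned))  # True, by this test alone
print(is_acceptable("letmein", banned))     # False
```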

To further guarantee security and protection, users will not be able to have a password “hint” that is ultimately accessible to unauthenticated personnel. In other words, the familiar “first elementary school” or “name of first pet” password prompt will cease to exist.

Although these changes to password security will take place among federal agencies, many Americans will not have this level of user authentication. Thus, the infographic below includes a variety of useful tips and instructions on how to create a breach-proof password.

According to NIST, these technologically advanced guidelines for password security and user authentication “should have a tested equal error rate of 1 in 1,000 or better, with a false-match rate of 1 in 1,000 or better” (www.FCW.com). When NIST implements these new guidelines, federal government user data will not only gain a greater level of security; the changes will also offer unprecedented protection of confidential national data against malicious data breaches, hackers, and cyber-attacks.

Road to Superintelligence

Imagine taking a time machine back to 1750—a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay. When you get there, you retrieve a dude, bring him to 2015, and then walk him around and watch him react to everything. It’s impossible for us to understand what it would be like for him to see shiny capsules racing by on a highway, talk to people who had been on the other side of the ocean earlier in the day, watch sports that were being played 1,000 miles away, hear a musical performance that happened 50 years ago, and play with your magical wizard rectangle that he could use to capture a real-life image or record a living moment, generate a map with a paranormal moving blue dot that shows him where he is, look at someone’s face and chat with them even though they’re on the other side of the country, and worlds of other inconceivable sorcery. This is all before you show him the internet or explain things like the International Space Station, the Large Hadron Collider, nuclear weapons, or general relativity.

This experience for him wouldn’t be surprising or shocking or even mind-blowing—those words aren’t big enough. He might actually die!

This pattern—human progress moving quicker and quicker as time goes on—is what futurist Ray Kurzweil calls human history’s Law of Accelerating Returns. This happens because more advanced societies have the ability to progress at a faster rate than less advanced societies—because they’re more advanced.

“We are on the edge of change comparable to the rise of human life on Earth” — Vernor Vinge

There is a lot of excitement about artificial intelligence (AI) and how to create computers capable of intelligent behavior. After years of steady but slow progress on making computers “smarter” at everyday tasks, a series of breakthroughs in the research community and industry have recently spurred momentum and investment in the development of this field.

Today’s AI is confined to narrow, specific tasks, and isn’t anything like the general, adaptable intelligence that humans exhibit. Despite this, AI’s influence on the world is growing. The rate of progress we have seen will have broad implications for fields ranging from healthcare to image- and voice-recognition. In healthcare, the President’s Precision Medicine Initiative and the Cancer Moonshot will rely on AI to find patterns in medical data and, ultimately, to help doctors diagnose diseases and suggest treatments to improve patient care and health outcomes.

In education, AI has the potential to help teachers customize instruction for each student’s needs. And, of course, AI plays a key role in self-driving vehicles, which have the potential to save thousands of lives, as well as in unmanned aircraft systems, which may transform global transportation, logistics systems, and countless industries over the coming decades.

Like any transformative technology, however, artificial intelligence carries some risk and presents complex policy challenges along several dimensions, from jobs and the economy to safety and regulatory questions. For example, AI will create new jobs while phasing out some old ones—magnifying the importance of programs like TechHire that are preparing our workforce with the skills to get ahead in today’s economy, and tomorrow’s. AI systems can also behave in surprising ways, and we’re increasingly relying on AI to advise decisions and operate physical and virtual machinery—adding to the challenge of predicting and controlling how complex technologies will behave.

Effectively integrating this technology into both government and private-sector activities presents tremendous opportunities, along with an array of considerations for the Federal Government spanning privacy, security, regulation, law, and research and development.

That is why the White House Office of Science and Technology Policy announced public workshops over the coming months on topics in AI to spur public dialogue on artificial intelligence and machine learning and identify challenges and opportunities related to this emerging technology.

The Federal Government also is working to leverage AI for public good and toward a more effective government. A new National Science and Technology Council (NSTC) Subcommittee on Machine Learning and Artificial Intelligence will meet for the first time next week. This group will monitor state-of-the-art advances and technology milestones in artificial intelligence and machine learning within the Federal Government, in the private sector, and internationally; and help coordinate Federal activity in this space.

Broadly, between now and the end of the Administration, the NSTC group will work to increase the use of AI and machine learning to improve the delivery of government services. Such efforts may include empowering Federal departments and agencies to run pilot projects evaluating new AI-driven approaches and government investment in research on how to use AI to make government services more effective. Applications in AI to areas of government that are not traditionally technology-focused are especially significant; there is tremendous potential in AI-driven improvements to programs and delivery of services that help make everyday life better for Americans in areas related to urban systems and smart cities, mental and physical health, social welfare, criminal justice, the environment, and much more.

Editor’s note: Ideas inspired by

Ed Felten, “Preparing for the Future of Artificial Intelligence,” WhiteHouse.gov, 5 May 2016.

Most Wanted: Catching a Cybercriminal

Most people in the United States are aware that the worst criminal offenders around the globe appear on the Federal Bureau of Investigation’s (FBI) “Most Wanted Fugitives” list, the “Most Wanted Terrorists” list, or the “Wanted by the FBI” podcast. But what about cybercriminals? As hackers and cybercriminals become more advanced in their techniques, the FBI’s investigative team has tended to be less concerned with the identity of a perpetrator than with preventing access to systems in the first place. Recently, however, the FBI has begun to target the individual perpetrators of cybercrime.

Given the ubiquity of cybercrime in the age of technology, the agency’s list of “Most Wanted Cybercriminals” has grown considerably since March. In fact, the list grew by nearly 50 percent when two young Syrians were charged with attempting to hack United States companies and media organizations, followed by the indictment of seven Iranian citizens accused of coordinating a months-long cyber attack on financial organizations located in New York. When Attorney General Loretta Lynch announced those indictments, she said the decision to provide public access to the most-wanted cybercriminals is a “new approach” at the Department of Justice that falls in line with its name-and-shame campaign (www.nextgov.com). The campaign, which launched in 2012, placed five Chinese hackers on the cyber most-wanted list; so far, all of those listed are men, and most are foreign nationals.

The infographic below explains why the United States needs more cybersecurity professionals to thwart cybercriminals.

As we now live in the age of digital technology, the United States hopes to make a more concerted effort to protect both public and private data. Without such proactive protective measures, the United States’ digital infrastructure may increasingly become the target of malicious breaches, hacks, and cyberattacks.

Open data can do more than government thinks it can.

Government thinks open data is an add-on that boosts transparency, but it’s more than that. Most open data portals don’t look like labors of love; they look like abandoned last-minute science fair projects. The current open data movement is more than a decade old, but some are still asking why they should even bother.

“Right now, it is irrational for almost anybody who works in government to open data. It makes no sense,” Waldo Jaquith said. “Most people, it’s not in their job description to open data — they’re just the CIO. So if he fails to open data, worst case, nothing bad happens. But if he does open some data and it has PII [personally identifying information], then his worst case is that he’s hauled before a legislative subcommittee, grilled, humiliated and fired.”

Despite that bleak assessment, Jaquith is the director of U.S. Open Data and one of the movement’s most active advocates. Open data is struggling to gain financial and spiritual backing. It may fizzle out within the next two years, Jaquith said, and a glance at government’s attitude toward the entire “open” concept supports that timeline.

The people who are really into open data — like Jaquith — aren’t the fad-following type. Open data’s disciples believe in it because they’ve seen that just a little prodding in the right spots can make a big difference. In 2014, Jaquith bought a temporary license for Virginia’s business registration data for $450 and published the records online. That data wasn’t just news to the public — it had been kept from Virginia’s municipal governments too. Before that, the state’s municipal governments had no way of knowing which businesses existed within their boundaries and, therefore, they had no way of knowing which businesses weren’t paying license fees and property taxes. Jaquith estimated (“wildly,” he admits) that this single data set is worth $100 million to Virginia’s municipal governments collectively.

The disconnect between the massive operational potential that open data holds and government’s slow movement toward harnessing it can be explained simply. Government thinks open data is an add-on that boosts transparency, but it’s more than that. Open data isn’t a $2 side of guacamole that adds flavor to the burrito. It’s the restaurant’s mission statement.

Here are six ideas that can help government more fully realize open data’s transformative power.

1. RECONSIDER YOUR DATA’S PURPOSE

Open data isn’t just about transparency and economic development. If it were, those things would have happened by now. People still largely don’t know what their governments are doing and no one’s frequenting their city’s open data portal to find out — they read the news. Open data portals haven’t stopped corruption; the unscrupulous simply reroute their activities around the spotlight. And if anyone’s using open data to build groundbreaking apps that improve the world and generate industry, they’re doing a great job keeping it a secret. For government, open data is about working smarter.

“I’m tired of the argument of ‘Oh, it will unlock value to the private sector,’” Jaquith said. “That’s nice. I hope people make billions of dollars off of that. But nobody in any government is going to spend any real amount of time on all the work that goes into opening all the data sets on a sustainable, complete basis because some stranger somewhere might get rich.”

Open data’s most basic advantage is that it makes life easier for government workers. Information that’s requested regularly can be put online, freeing workers to do other tasks. At its best, open data uncovers interjurisdictional insights that save money and improve operations. And no matter how tenuous, peripheral bonuses like transparency and economic development are still there too. Governments aren’t gaining the benefits of open data today because there’s not been a rigorous effort to integrate the concept of openness into public-sector work.

One unnamed city that ranks respectably in the U.S. City Open Data Census has more than 1,000 records on its open data portal. But only 132 of those records are data sets and 86 of those data sets are pieces of a single budget that have been split apart. This is a common practice across the public sector and one that reveals intent. For the most part, governments aren’t publishing their data because they know it’s a useful resource that ought to be easily accessible, well curated, neat and current so that it can be used by all. It’s because 1,000 sounds better than 50 when an official is giving a speech or addressing stakeholders, and they’re not the ones who have to use it.

2. CONSUME YOUR OWN OPEN DATA

Governments use data. Open data portals are designed for displaying and sharing information in an organized way. Therefore, governments should use a tool designed for the thing they’re trying to do. Even putting aside the “open” concept, public-sector offices around the nation would benefit hugely from having a common, shared pool of data they can draw upon when they need reliable information. Putting the data online is the most practical way to do that — and it also happens to meet the political dictates of transparency — but government should be doing this for its own sake.
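
In practice, consuming your own open data can be as plain as pulling a published file and answering an operational question with it. A minimal Python sketch along the lines of the Virginia business-license example above; the portal URL and column names are hypothetical:

```python
# Pull a CSV from an open data portal endpoint and aggregate it.
# URL and columns are invented placeholders, not a real portal.
import csv
import io
import urllib.request

URL = "https://data.example.gov/api/views/business-licenses/rows.csv"

with urllib.request.urlopen(URL) as resp:
    rows = list(csv.DictReader(io.TextIOWrapper(resp, encoding="utf-8")))

unpaid = [r for r in rows if r["license_fee_paid"] == "no"]
print(f"{len(unpaid)} of {len(rows)} registered businesses owe license fees")
```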

“The most common mistake I see governments make with open data is thinking that publication is the end of the activity, rather than beginning of the activity,” said Dan O’Neil, executive director of the Smart Chicago Collaborative. “Because publishing data can be, if we live in a perfect world, simply a prefatory step to allowing residents to talk about how data affects their lives and helps them live better. But usually, what happens is they publish data and they run as fast as they can in the other direction.”

3. PLAN BEYOND TECHNOLOGY

Open data has outgrown the novelty phase, and that means it needs organizational and policy support to survive. It needs comprehensive planning and believers who will act. People wouldn’t be giving up much if they abandoned open data today, O’Neil said, because open data hasn’t done much. The tragedy of giving up now, he said, would purely be a loss of prospect, because open data could change the world if the focus were shifted away from technology and toward the needs of the people.

An organization called City Bureau is encouraging young non-white people to become reporters in an attempt to restore balance to journalistic coverage on the south and west sides of Chicago. Another journalistic endeavor on Chicago’s South Side, the Invisible Institute, serves as a watchdog that uses investigative reporting, litigation and public discussion to further its civil rights goals. O’Neil’s world is one of civic tech and social justice, but regardless of whether a person supports these particular groups ideologically, everyone can learn from their approach.

“That’s where it’s at,” O’Neil said. “Getting data that isn’t open and making it open and then having an actual community strategy around analyzing not just the data, but the social justice issues around the general milieu.”

Government needs to do the same if open data is to find meaning. Just putting data online and hoping for the best isn’t wrong, but it doesn’t do much. Open data needs a clear plan, and it needs to come from a wide patronage within government.

“The most common mistake is focusing on the project over the practice,” said Will Saunders, Washington state’s open data guy (his actual title). “It’s always attractive to have an executive sponsor, and a lot of times open data projects get started as a transparency commitment, as ‘a hallmark of my administration’ kind of thing. [Sometimes] you wind up having a diligent, small group of folks who facilitate the publication of data and then if there’s a leadership change in three or four years, then a lot of the sustainability just isn’t there.”

4. AUTOMATE SLOWLY

Washington could be publishing three to four times more data than it is today, Saunders said, but the state holds back, prioritizing longevity through automation so that the efforts will stick.

“Program managers know that they can and should publish, and when they do, they tend to link it to their own programmatic goals as opposed to a specific political commitment,” he said. “What I typically do is work with agencies to see if there’s a way I can encourage them to make publication part of their program design, and if I can’t, then I wait for another day.”

This approach is slower, but like proper diet and exercise, experts recommend it because it works.

Open data’s relevance will grow only if efforts mature. In Washington and elsewhere, data sets are often used for purposes different from what was originally intended. Opportunities to repurpose data will appear more frequently as the information becomes better organized, shared and understood. One severe obstacle to that prospect is that today there exist few standard schemas for publishing data. Roads, for instance, cross every boundary the nation has, and yet road data takes a new format in each jurisdiction. Today, without standards, a large project that uses open road data sounds like more trouble than it’s worth.
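
A small example shows the cost. The same road segment, as two jurisdictions might publish it, needs bespoke translation before it can be compared; every field name below is invented for illustration:

```python
# The same road segment as two jurisdictions might publish it.
city_a = {"RD_NAME": "Main St", "SPD_LMT": "35", "LEN_MI": "1.2"}
city_b = {"road": "Main Street", "speed_limit_mph": 35, "length_km": 1.93}

def normalize_a(rec: dict) -> dict:
    return {"name": rec["RD_NAME"],
            "speed_mph": int(rec["SPD_LMT"]),
            "length_km": float(rec["LEN_MI"]) * 1.609}  # miles to km

def normalize_b(rec: dict) -> dict:
    return {"name": rec["road"],
            "speed_mph": rec["speed_limit_mph"],
            "length_km": rec["length_km"]}

print(normalize_a(city_a))
print(normalize_b(city_b))
```

A shared schema would make the translation step unnecessary, which is exactly what the next idea calls for.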

5. COLLABORATE ON THE CREATION OF PUBLISHING STANDARDS

Government has a hard time following publishing standards today because not many exist. The President’s Task Force on 21st Century Policing is developing some standards for police data, Data.gov is working toward a standard that will let companies like Uber publish their ride data meaningfully, and programs like Bloomberg’s What Works Cities initiative are positioned to develop standards across city lines. Comprehensive and accessible publishing rules would reduce the work required to free data sets, and would solve many of today’s data sharing and comprehension snags.

6. TRUST YOUR EXPERTS

The public isn’t qualified to tell government how it should be using its data, because the public doesn’t understand government. Most people think “the government” means the president or Congress. No one understands the challenges of government better than those who run it, and they are the people who should guide the use of public-sector data.

Utah is growing its open data automation daily under the guidance of experts. The technology office monitors which data sets its offices need and educates stakeholders on how to use that information. The state auditor, the health-care system and external data requesters are among those learning, said Dave Fletcher, Utah’s CTO.

“Increasingly we’re working on an initiative that we’re calling data-driven government to make better decisions based on data,” Fletcher said, adding that they share statewide data with counties so information like graduation rates, unemployment rates, taxes and air quality measures are easily accessed by commissioners.

Drew Mingl, Utah’s open data coordinator, said people are grateful to have a definitive centralized source of state information that can yield new insights. Data now being drawn from the state’s Medicare system, for example, showed a $25,000 deviation in the cost of hip replacement surgery in two neighboring counties.

“People are now making better, more informed decisions because we’ve put all this state data in one place where they can get access to it,” Mingl said.

Los Angeles runs one of the best open data portals in the nation. It ranks first on the U.S. City Open Data Census, with nearly 100 percent of the city’s data open to the public. It’s not perfect, but what it has, it gained through the knowledge of the city’s experienced workers.

Ted Ross, general manager of L.A.’s Information Technology Agency, said the city wanted three things from its portal: a way for average citizens to view data casually, capabilities for data scientists who wanted to do more with the data, like download it or use APIs, and the ability to integrate federated data sets from across systems. Contracting a vendor was the easiest way to reach those goals, Ross said, so rather than develop the portal in-house, that’s what Los Angeles did.

The city listens to the people who use data most to guide its efforts: journalists, researchers, officials and technology staff, Ross said. This feedback ensures the city’s doing more than fulfilling a political mandate, he said.

L.A. has done more with its data than leave it dangling. Vision Zero, a multinational road safety program, promotes roadway design to reduce pedestrian injury and death, and it’s powered by the city’s open data.

“We worked with USC, who volunteered about 25 graduate-level data science students and three professors, and we basically analyzed for causation and commonality, and trends relating to those, and they can help identify some of the high-value networks,” Ross said. “That’s a prime example of taking open data and … using it as a platform to interact with a local university and actually identify information and insight that’s being leveraged to save lives.”

Open data doesn’t need to save lives — and it usually won’t. Its value is in supporting the core functions of government, which are basic things like keeping parks and water clean and trash cans empty, said Josh Baron, applications delivery manager for Ann Arbor, Mich., and that should be the goal of everyone who works in government.

“Our No. 1 job,” Baron said, “is to support the lines of business who are out there making the city a wonderful place to live.”

Editor’s note:

Colin Wood, “6 ideas to help government realize open data’s transformative power,” GovTech, 21 Apr 2016.

Want a Better GDP? Close the Gender-Wage Gap

On April 12, 2016, the United States observed Equal Pay Day, a day that symbolizes how far into the year women must work to earn what men earned in the previous year (www.pay-equity.org). Although progress has been made in the decades since the Equal Pay Act (EPA) was passed in 1963, the United States still has work to do to achieve gender-wage equality. Not only is equal pay a step forward for women; studies show it would also benefit the United States’ gross domestic product (GDP) and the economy as a whole.

In a recent report published by the McKinsey Global Institute (MGI), findings show that the greater the gender parity in the workplace—in terms of pay, hours worked, and access to full-time jobs—the greater the benefit to the country’s overall economy (www.govexec.com). The report strongly recommends that both government and businesses take a more proactive stance in effectuating gender equality. Currently, economists are concerned that as America’s population ages and retires, there will not be enough young workers to take their place, which would harm the economy: there would be fewer people to provide goods and services and to work and earn wages, along with lower levels of productivity. Each of these factors would likely culminate in slowing GDP growth (www.govexec.com).

In spite of economists’ worries, the GDP will not suffer if employers aim to bridge the pay gap by making more room for women and paying them the same wages as men in the workforce. At present, women work fewer hours, mostly in lower-paying sectors, and have a lower labor force participation rate than men. However, if employers increase women’s labor force participation and assist them with entering and staying in more lucrative and highly-productive jobs, it will be easier to maintain current levels of economic activity and production even as the aging population retires, which will ultimately prevent economic deceleration in the United States.

Although the infographic below was published in 2012, the information is still relevant to the issue of pay disparity in 2016.

The McKinsey report provides numerical estimates of what closing the current gender pay gap could be worth. According to the report, if by the year 2025 women are paid the same as men, work the same number of hours as men, and are represented equally in every sector, an additional $4.3 trillion could be added to the United States GDP. That is 20 percent more than in a business-as-usual scenario, which does not account for closing the gender pay gap. Since this is a high estimate, requiring women’s paid labor to precisely mirror men’s, McKinsey researchers also modeled a more plausible scenario in which each U.S. state matches the pace of the states currently making the greatest progress toward gender-wage equality. In that scenario, an additional $2.1 trillion could be added to the GDP by 2025.

Equal Pay Day was established by the National Committee on Pay Equity (NCPE) in 1996 as a public awareness event. Women, men and the economy alike would be glad to see it become a commemoration of the past the moment gender-wage equality becomes a fact of life in the United States.

Natural Disaster Crises? Technology May be the Answer

Whether it be a tornado, tsunami, earthquake, monsoon, hurricane, flood, or any other natural phenomenon, no one can be fully prepared for the aftermath of such disasters. Even with around-the-clock efforts from dedicated responders, disaster victims almost always outnumber the help available to them, forcing unfair judgments about which victims take priority and which can hold out just a little longer. Luckily, One Concern, Inc.—a startup that earned a coveted spot on GovTech100, a list of the top 100 companies focused on government customers—aims to be one of the first to use artificial intelligence to save lives through analytical disaster assessment and calculated damage estimates.

The idea for One Concern was born with its CEO and co-founder, Ahmad Wani, whose hometown of Kashmir, India is located in a region especially prone to earthquakes and floods. In 2005, Kashmir was hit by an earthquake that took the lives of 70,000 people—one of two disasters that inspired Wani to pursue graduate-level earthquake engineering research at Stanford University. On another occasion, in 2014, a large flood engulfed the state of Kashmir while Wani was visiting his parents—a disaster that left eighty percent of Kashmir underwater in a few short minutes. According to Wani, people had to resort to camping out on their rooftops for up to a week without food and clean water while waiting for uncertain rescue by ad hoc response teams.

The infographic below demonstrates the detrimental impact that various natural disasters have on the communities in which they occur.

Although Wani is cognizant that his experiences occurred in a developing country, people in developing and developed countries alike face the same difficulty and chaos in the event of a natural disaster. Wani is trying to use his experiences to solve the problem of post-disaster reconnaissance and rescue through artificial intelligence, with the intent of saving lives and strengthening communities. Indeed, using its core product and web platform, “Seismic Concern,” the company can alert jurisdictions affected by an earthquake with a color-coded map of likely structural damage, and notify emergency operation centers so that they can allocate their limited resources to rescue and recovery. Seismic Concern not only supports response prioritization, but also recovery operations such as material staging and shelter management, by compiling an Initial Damage Estimate (IDE), which emergency operation centers need in order to request financial assistance from state and federal institutions.

Furthermore, One Concern is using state-of-the-art machine learning algorithms, stochastic modeling, training modules, as well as geophysical and seismological research to enable emergency operation centers to train based on actual earthquake simulations before an actual earthquake strikes. According to One Concern, this can aid in personnel readiness and planning development, thus making a community more proactive and resilient.

For now, One Concern is relatively unknown to the cities and countries that might adopt the revolutionary technology in which it specializes. Fortunately, Wani’s company is in the business of being ready and able to respond to anything at any time—an industry that spans the globe. By empowering rescuers and first responders with such valuable tools in times of crisis, it can give them what they need to save lives.

Eye-phone: A technology that powers the blind

In the past, visually impaired people had to shell out thousands of dollars for technology that magnified their computer screens, spoke navigation directions, identified their money and recognized the color of their clothes. Today, users only need smartphones and a handful of apps and accessories to help them get through their physical and online worlds. New software is helping people with limited or no sight navigate around town and across the Internet.

Luis Perez carefully frames his photo to get the best shot for Instagram. Gripping his white cane in one hand and his iPhone in the other, Perez squints at the screen and points the display toward the sunset. His iPhone speaks: “One face. Small face. Face near top left edge.”

Perez snaps several photos and then puts his iPhone back in his pocket, with plans to examine the images later. Taking sunset pictures with an iPhone is nothing remarkable — until you consider that Perez, a 44-year-old who lives in St. Petersburg, Florida, is legally blind. Not being able to clearly see the photos he’s taking doesn’t slow him down. By using technology built into the iPhone, along with apps from the App Store, Perez has developed quite a photography habit.

“My time with vision is limited,” says Perez, who began losing his sight about 15 years ago from retinitis pigmentosa, a genetic eye disease. He now sees only a small circle of what’s directly in front of him, and that will deteriorate over the next few years. “I have to enjoy it as much as I can, and photography is part of that.”

VoiceOver, the iPhone’s built-in screen-reading technology, powers this capability.

VoiceOver first turns off the iPhone’s single-tap function on the display. After that, users can move their fingers across the screen to hear what’s on the display. That could be anything from the names of the apps themselves to words in an email, a text message or a social media post. When users turn on the “Speak Hints” function, VoiceOver will say what an app is and then give instructions for using it. Users can even adjust the voice’s speaking rate and pitch.

Lay of the land

By itself, VoiceOver makes it easier for people with limited sight to use their iPhones. But the technology really comes into its own when mobile apps hook into its features. BlindSquare, which talks to users as they walk along crowded city streets and inside busy shopping malls, is a great example.

In addition to VoiceOver, the mobile app taps into the iPhone’s built-in GPS, FourSquare — which knows local landmarks and surrounding areas — and a crowdsourced map of the world. That combination allows BlindSquare to speak names of landmarks, such as cafes, shops and libraries, as the user walks by. Shaking the iPhone prompts BlindSquare to say the current address and nearest intersection. It will even, for example, tell the user that the entry to her destination has “four doors, two of which are automated, and there’s a second set of doors after the vestibule.”

“Twenty years ago, there’s no way we’d be able to walk on our own to find a restaurant,” says Kevin Satizabal, a blind musician and an online communities assistant for the Royal London Society for Blind People. “That’s the great thing about technology. It’s letting people blend in and do everyday tasks with a lot greater ease.”

This is the best time in history to be blind. – Luis Perez

Voice Dream reads out text from Web pages, PDFs, PowerPoint presentations and other files. The Be My Eyes app lets blind users video-chat with sighted volunteers for things like distinguishing between two cans of soup. KNFB Reader pulls text from photos taken with the iPhone.

But it’s not just purpose-built apps for the blind that tap into the iPhone’s assistive technology. Many people say some mainstream apps, such as Twitter and Periscope for social media and Uber and Lyft for ride-booking services, have well-designed accessibility, too.

“What I really get excited about are all these mainstream apps,” says Blanks. “That’s what really makes me feel part of society.” Blanks’ sentiment would likely have pleased Apple’s late co-founder, Steve Jobs, who famously said “it just works” when talking about his company’s products.

“We consider accessibility an integral part of what we build into our technology, not an add-on,” says Sarah Herrlinger, Apple’s senior manager for global accessibility policy and initiatives. “It’s a basic human right.”

Almost there

Apple’s device isn’t the only smartphone to have accessibility features. Google’s Android software also has text-to-speech and screen-reading features for phone makers to use. Microsoft, working with Guide Dogs UK, has developed a wearable system that creates a “3D soundscape” similar to BlindSquare.

But not all apps are created equal. Some lose their assistive benefits after being updated. Others add the features as an afterthought, instead of from the get-go.

Lisamaria Martinez, a blind woman who lives in Union City, California, likes a parenting app that explains her baby’s milestones. But the app presents the information in an image of text, not text on its own. That means VoiceOver doesn’t work. To get around it, Martinez takes a screenshot of the images, uses another app to pull the text out of the image and then translates the text into speech.
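
Her workaround amounts to a short pipeline: screenshot, optical character recognition, text-to-speech. A rough sketch in Python using the third-party pytesseract and pyttsx3 packages (she doesn’t name her apps, so this is purely illustrative):

```python
# Screenshot -> OCR -> speech, mirroring the workaround described above.
# Requires Pillow, pytesseract (plus a local Tesseract install), and pyttsx3.
from PIL import Image
import pytesseract
import pyttsx3

text = pytesseract.image_to_string(Image.open("screenshot.png"))  # pull text out of the image

engine = pyttsx3.init()           # system text-to-speech
engine.setProperty("rate", 180)   # speaking rate, roughly words per minute
engine.say(text)
engine.runAndWait()
```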

“It’s super annoying,” says Martinez, who works with Blanks at LightHouse. “The problem is people don’t think about accessibility from the design stage.” That’s what LightHouse and other advocacy groups want to change. “With the right support, we can do a lot of things that people didn’t think we could do,” says Perez, the avid photographer who also teaches people to use technology.

Editor’s note:

Shara Tibken, “Seeing Eye phone: Giving independence to the blind,” CNET, 25 Mar 2016.