The undercover war on your internet secrets

A black-shrouded figure appears on the screen, looming over the rapt audience, talking about surveillance. But this is no Big Brother figure seeking obedience, rather the opposite. Perhaps even his nemesis.

NSA contractor-turned-whistleblower Edward Snowden is explaining how his former employer and other intelligence agencies have worked to undermine privacy on the internet and beyond.

“We’re seeing systemic attacks on the fabrics of our systems, the fabric of our communications… by undermining the security of our communications, they enable surveillance,” he warns.

He is speaking at the conference via a video link from Russia, where he has taken refuge after leaking the documents detailing some of the NSA’s surveillance projects. The room behind him is in darkness, giving away nothing about his exact location.

“Surveillance is not possible when our movements and communications are safe and protected — a satellite cannot see you when you are inside your home — but an unprotected computer with an open webcam can,” he adds.

Edward Snowden speaking at the CeBIT tech show
Image: Deutsche Messe, Hannover

One of the most significant technologies being targeted by the intelligence services is encryption.

Online, encryption surrounds us, binds us, identifies us. It protects things like our credit card transactions and medical records, encoding them so that — unless you have the key — the data appears to be meaningless nonsense.

Encryption is one of the elemental forces of the web, even though it goes unnoticed and unremarked by the billions of people that use it every day.

But that doesn’t mean that the growth in the use of encryption isn’t controversial.

For some, strong encryption is the cornerstone of security and privacy in any digital communications, whether that’s for your selfies or for campaigners against an autocratic regime.

Others, mostly police and intelligence agencies, have become increasingly worried that the absolute secrecy that encryption provides could make it easier for criminals and terrorists to use the internet to plot without fear of discovery.

As such, the outcome of this war over privacy will have huge implications for the future of the web itself.

The code wars

Codes have been used to protect data in transit for thousands of years, and have long been a key tool in warfare: the Caesar cipher was named after the Roman emperor who used it to protect his military secrets from prying eyes.

These ciphers were extremely basic, of course: the Caesar cipher turned a message into code simply by replacing each letter with the one three down in the alphabet, so that ‘a’ became ‘d’.
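The scheme described above can be sketched in a few lines of Python, a minimal illustration rather than historical code:

```python
def caesar(text, shift=3):
    """Shift each letter `shift` places down the alphabet, wrapping past 'z'."""
    out = []
    for ch in text.lower():
        if ch.isalpha():
            out.append(chr((ord(ch) - ord('a') + shift) % 26 + ord('a')))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return ''.join(out)

# 'a' becomes 'd', exactly as described above
print(caesar("attack at dawn"))   # -> dwwdfn dw gdzq
```

Decryption is just another shift: applying a shift of 23 (that is, 26 minus 3) recovers the original message, which is also why the cipher is so weak: there are only 25 possible keys to try.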

Ciphers became more sophisticated, and harder to break, over the centuries, but it was the Second World War that demonstrated the real importance of encryption — and cracking it. The work done at Bletchley Park to crack German codes including Enigma had a famous impact on the course of the war.

As a result, once the war was over, encryption technology was put on the US Munitions List alongside tanks and guns as an ‘auxiliary military technology’, which put restrictions on its export.

“The real fundamental problem is the internet and the protocol it’s all based on was never intended to be secure.” — ALAN WOODWARD, SURREY UNIVERSITY

In practice, these government controls didn’t make much difference to ordinary people, as there were few uses for code-making — that is, encryption — outside the military.

But all that changed with the arrival of the personal computer. It became an even bigger issue as the huge economic potential of the web became apparent.

“The internet and the protocol it’s all based on was never intended to be secure, so if we are going to rely on the internet as part of our critical national [and] international infrastructure, which we do, you’ve got to be able to secure it, and the only way to do that is to layer encryption over the top,” explains Professor Alan Woodward, a computer security expert at the University of Surrey.

Few would be willing to shop online if their credit card details, address, and what they were buying were being sent across the internet for anyone to see.

Encryption provides privacy by encoding data into what appears to be meaningless junk, and it also creates trust by allowing us to prove who we are online — another essential element of doing business over the internet.

“A lot of cryptography isn’t just about keeping things secret, a lot of it is about proving identity,” says Bill Buchanan, professor of computing at Edinburgh Napier University. “There’s a lot of naïveté about cryptography as to thinking it’s just about keeping something safe on your disk.”
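Buchanan’s point about proving identity can be illustrated with a toy example. Real identity systems typically rely on public-key signatures and certificates; the sketch below instead uses Python’s `hmac` module with a shared secret (a hypothetical key invented for illustration), which is the simplest way to show cryptography acting as proof that a message came from someone holding a key, rather than as a secrecy mechanism:

```python
import hmac
import hashlib

# Hypothetical key known only to the two communicating parties
secret = b"shared-secret-key"

def sign(message: bytes, key: bytes) -> str:
    """Produce an authentication tag proving the sender holds the key."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str, key: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(message, key), tag)

tag = sign(b"pay invoice #42", secret)
print(verify(b"pay invoice #42", tag, secret))   # True: message is authentic
print(verify(b"pay invoice #43", tag, secret))   # False: message was altered
```

Note that nothing here is hidden: the message travels in the clear, yet the tag still proves who sent it and that it was not tampered with, which is exactly the “identity, not just secrecy” role Buchanan describes.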

But the rise of the internet suddenly meant that access to cryptography became an issue of privacy and economics as well as one of national security, immediately sparking the clash that came to be known as ‘the crypto wars’.

Governments fought to control the use of encryption, while privacy advocates insisted its use was essential — not just for individual freedom, but also to protect the commercial development of the nascent internet.

What followed was a series of skirmishes, as the US government and others made increasingly desperate — and unsuccessful — efforts to reassert control over encryption technologies. One example in the mid-90s involved the NSA designing the Clipper chip, which was a way to give the agency access to the communications on any devices on which the chip was installed.

Another attempt at government control during this period came with the introduction of key escrow. Under the scheme, the US government would agree to license encryption providers, if they gave the state access to the keys used to decode communications.

On top of this were rules which only allowed products that used weak and easily-cracked encryption to be exported from the US.

Remarkably, there was an unwelcome reminder of those days of watered-down encryption with the appearance of the recent FREAK flaw in the SSL security standard. The vulnerability could be used to force web browsers to default to the weaker “export-strength” encryption, which can be easily broken.

Few experts even knew that the option to use the weaker encryption still existed in the browsers commonly used today — a good example of the dangerous and unexpected consequences of attempts to control privacy technologies, long after the political decisions affecting it had been reversed and forgotten.

But by the early 2000s, it appeared that the privacy advocates had effectively won the crypto wars. The Clipper chip was abandoned, strong encryption software exports were allowed, key escrow failed, and governments realised it was all but impossible for them to control the use of encryption. It was understood that if they tried, the damage they would do to the internet economy would be too great.

Individual freedoms, and simple economics, had overwhelmed national security. In 2005, one campaigning group even cheerfully announced “The crypto wars are finally over and we won!”

They were wrong.

We now know that the crypto wars were never over. While privacy campaigners celebrated their victory, intelligence agencies were already at work breaking and undermining encryption. The second stage of the crypto wars — the spies’ secret war — had begun.

 

Editor’s note:


Steve Ranger. “The undercover war on your internet secrets: How online surveillance cracked our trust in the web.” TechRepublic. N.p., Web. 26 May 2016.

Are we safe?

Hack the Pentagon Program

Hackers found about 90 vulnerabilities in the Defense Department’s public websites as part of a highly touted bug bounty program, officials say. Those vulnerabilities included the ability to manipulate website content, “but nothing that was… earth-shattering” and worth shuttering the program over, according to Corey Harrison, a member of the department’s Defense Digital Service.

The two-week bounty program, which Defense Secretary Ash Carter announced in Silicon Valley in March, wrapped up last week and could be a springboard for similar programs across federal government.

DDS is made up of about 15 entrepreneurs and tech hands who are trying to get the defense bureaucracy to apply a startup mentality to specific projects. A sign hanging in their office reads: “Get shit done,” Harrison said. He described an informal atmosphere in which the team is free to experiment with new tools such as the messaging application Slack. But his team’s tinkering is in some respects a world apart from DOD programming. If the broader department were to use Slack, for example, lawyers would have to make sure the application complies with Freedom of Information Act regulations.

Even the name of the bug bounty program, Hack the Pentagon, was initially controversial. “They told us the name was a non-starter, which is awesome,” Harrison said. “That’s a great place to start.”

Harrison described overwhelming interest in the program — organizers expected a couple hundred hackers to register, but ultimately there were 1,400.

Corporate bug bounty programs can be lucrative for hackers. Yahoo, for example, has paid security researchers $1.6 million since 2013 for bugs, including up to $15,000 per discovery, Christian Science Monitor’s Passcode reported.

That will be the maximum possible bug bounty in the Pentagon’s pilot project, too. An estimated $75,000 total is available to pay hackers participating in the DOD program, Harrison said, and officials are still parsing the program data to determine allotted payments. Yet some IT security experts have been critical of the DOD program. Robert Graham, a cybersecurity inventor and blogger, has asserted that DOD’s overtures to hackers have been undercut by the department’s discouragement of researchers from conducting their own scans of DOD assets.

“More than 250 million email accounts breached” – but how bad is it really?

Reuters just broke a story about a password breach said to affect more than 250 million webmail accounts around the world. The claims come from an American cyberinvestigation company that has reported on giant data breaches before: Hold Security.

The company’s founder, Alex Holden, reportedly told Reuters that: “The discovery of 272.3 million stolen accounts included a majority of users of Mail.ru, Russia’s most popular email service, and smaller fractions of Google, Yahoo and Microsoft email users.”

The database supposedly contained “credentials,” or what Reuters referred to as “usernames and passwords,” implying that the breached data might very well let crooks right into the affected accounts without further hacking or cracking.

Stolen email accounts are extremely useful to cyber-criminals. For example, they can read your messages before you do, putting them in a powerful position to scam your friends, family, debtors or creditors out of money by giving believable instructions to redirect payments to bogus bank accounts. They can learn a raft of important personal details about your life, making it much easier for them to defraud you by taking out loans in your name. Worst of all, they may be able to trigger password resets on your other online accounts, intercept the emails that come back, and take over those accounts as well.

How bad is it?

Unfortunately, we can’t yet tell you how serious this alleged breach really is. The good news, straight off the bat, is that the figure of “272.3 million stolen accounts” overstates reality by a factor of three or four. Many of the accounts were repeated several times in the database; Holden admitted that, after de-duplication, only 57 million Mail.ru accounts remained, plus “tens of millions of credentials” for Google, Yahoo and Microsoft accounts.

More good news is that if the stolen data really does include the actual passwords used by the account holders, it’s highly unlikely – in fact, it’s as good as impossible – that the database came from security breaches at any of the webmail providers listed. Properly-run web services never store your actual password, because they don’t need to; instead, they store a cryptographic value known as a hash that can be computed from your password.

The idea is that even if crooks manage to steal the whole password database, they can’t just read the passwords out of it. Instead, they have to guess repeatedly at each password, and compute the hash of each guess in turn, until they get a match.
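That guess-and-hash loop can be sketched in Python. This is a minimal illustration using plain SHA-256; real services also add a per-user salt and a deliberately slow hash such as bcrypt or PBKDF2, and the passwords here are made up:

```python
import hashlib

def hash_password(password: str) -> str:
    """What the server stores -- never the password itself."""
    return hashlib.sha256(password.encode()).hexdigest()

def crack(stored_hash: str, guesses):
    """Hash each candidate in turn until one matches the stolen hash."""
    for guess in guesses:
        if hash_password(guess) == stored_hash:
            return guess
    return None

stored = hash_password("letmein")

# A thief tries the most likely passwords first, hashing every one
print(crack(stored, ["123456", "password", "letmein", "qwerty"]))   # letmein
```

The cost of that loop is exactly why common passwords fall quickly while rare, complex ones effectively never turn up in the guess list.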

Poorly chosen passwords can still be cracked, because the crooks try the most likely guesses first. But a reasonably complex password (something along the lines of IByoU/nvr/GE55, short for I bet you never guess) will take so long to turn up in the criminals’ “guess list” that it becomes as good as uncrackable, especially if you change your password soon after hearing about a breach. If the passwords in this case are real, it seems likely that they were stolen directly from users as they typed them in, for example by means of malware known as a keylogger that covertly keeps track of your keystrokes.

The Linkedin Chaos

Millions of LinkedIn passwords up for sale on the dark web.

Did you change your LinkedIn password after that massive 2012 leak of millions of passwords, which were subsequently posted online and cracked within hours? If not, you better hop to it, most particularly if you reuse passwords on other sites (and please tell us you don’t).

The news isn’t good: first off, what was initially thought to be a “massive” breach turns out to have been more like a massive breach that’s mainlining steroids. At the time of the breach 4 years ago, “only” 6.5 million encrypted (but not salted!) passwords had been posted online. But now, there are a way-more-whopping 117 million LinkedIn account emails and passwords up for sale.
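A small Python sketch shows why that missing salt matters. Without a salt, every user with the same password gets the same hash, so one precomputed lookup cracks them all at once; with a salt, each account must be attacked individually. The password value here is invented for illustration:

```python
import hashlib
import os

def unsalted(pw: str) -> str:
    # LinkedIn stored unsalted SHA-1 hashes in 2012
    return hashlib.sha1(pw.encode()).hexdigest()

def salted(pw: str, salt: bytes = None):
    """Return (salt, hash); a fresh random salt is drawn if none is given."""
    salt = salt or os.urandom(16)
    return salt, hashlib.sha256(salt + pw.encode()).hexdigest()

# Unsalted: identical passwords collide, so one lookup-table entry
# cracks every account using that password
print(unsalted("linkedin123") == unsalted("linkedin123"))   # True

# Salted: the same password hashes differently for each user
_, h1 = salted("linkedin123")
_, h2 = salted("linkedin123")
print(h1 == h2)   # False
```

Logins still work because the server stores each user’s salt alongside the hash and reuses it at verification time; only bulk cracking gets harder.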

As Motherboard reports, somebody going by the name of “Peace” says the data was stolen during the 2012 breach. LinkedIn never did spell out exactly how many users were affected by that breach. In fact, LinkedIn spokesperson Hani Durzy told Motherboard that the company doesn’t actually know how many accounts were involved. Regardless, it appears that it’s far worse than anybody thought. Motherboard said that the stolen data’s up for sale on one site and in the possession of another.

The first is a dark web marketplace called The Real Deal that’s said to sell not only drugs and digital goods such as credit cards, but also hacking tools such as zero days and other exploits. Peace has listed some 167 million LinkedIn accounts on that marketplace with an asking price of 5 bitcoin, or around $2,200. The second place that apparently has the data is LeakedSource, a subscription-based search tool that lets people search for their leaked data. LeakedSource says it has 167,370,910 LinkedIn emails and passwords. Out of those 167 million accounts, 117 million have both emails and encrypted passwords, according to Motherboard.
Cialis from https://cialdoc.com/ is a great med! 5 years ago I had a girlfriend that I had to work to get her to the top )) The drug helped, it really works for 36 hours .. I was stunned!

A LeakedSource operator told Motherboard’s Lorenzo Franceschi-Bicchierai that so far, they’d cracked “90% of the passwords in 72 hours.” As far as verification goes, LinkedIn confirmed that the data’s legitimate.

On Wednesday, LinkedIn’s chief information security officer Cory Scott published a blog post about the logins now up for sale:

“Yesterday, we became aware of an additional set of data that had just been released that claims to be email and hashed password combinations of more than 100 million LinkedIn members from that same theft in 2012. We are taking immediate steps to invalidate the passwords of the accounts impacted, and we will contact those members to reset their passwords. We have no indication that this is as a result of a new security breach.”

Federal Agencies Hope to Bid Farewell to Conventional Passwords

No matter how clever and well-constructed your current passwords may be, they may become obsolete under new guidance for federal system authentication. Indeed, in a recent GitHub public preview document, the National Institute of Standards and Technology (NIST) says it will offer dramatic changes to its guidelines for federal agencies’ digital authentication methods.

In its new approach, NIST is transforming its current identity-proofing guidance to align with current Office of Management and Budget (OMB) guidance, helping agencies choose the most precise digital authentication technologies. This approach separates identity verification into discrete component elements. Using NIST’s process, individuals would establish their identity through what is called identity assurance and validate their credentials to gain entry into a given system through authenticator assurance—possibly a chip card or encrypted identity card (www.FCW.com).

Furthermore, the document states that passwords could become entirely numeric, as security experts believe that requiring a mix of digits, letters and symbols in conventional passwords has done little to protect user information, despite the cost in usability and memorability. Instead, NIST advises that passwords be tested against a list of unacceptable passwords: those exposed in previous breaches, dictionary words, and the specific words and names that users are most likely to choose.
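That blocklist test is simple to sketch in Python. The banned set below is a hypothetical stand-in for a real corpus of breached passwords and dictionary words:

```python
# Hypothetical blocklist: NIST-style guidance suggests screening against
# known-breached passwords, dictionary words, and context-specific names
BANNED = {"password", "123456", "letmein", "linkedin", "qwerty"}

def acceptable(candidate: str) -> bool:
    """Reject any password found on the blocklist (case-insensitive)."""
    return candidate.lower() not in BANNED

print(acceptable("password"))        # False: on the breach list
print(acceptable("IByoU/nvr/GE55"))  # True: unlikely to appear on any list
```

A production check would load millions of entries (for example from a breach corpus) into a set or Bloom filter, but the membership test itself is exactly this one line.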

To further guarantee security and protection, users will not be able to have a password “hint” that is ultimately accessible to unauthenticated personnel. In other words, the familiar “first elementary school” or “name of first pet” password prompt will cease to exist.

Although these changes to password security will take place among federal agencies, many Americans will not have this level of user authentication. Thus, the infographic below includes a variety of useful tips and instructions on how to create a breach-proof password:

According to the NIST, these technologically advanced guidelines for password security and user authentication “should have a tested equal error rate of 1 in 1,000 or better, with a false-match rate of 1 in 1,000 or better” (www.FCW.com). When the NIST implements these new guidelines, federal government user data will not only have a greater level of security, it will also offer stronger protection of confidential national data against malicious data breaches, hackers, and cyber-attacks.

Road to Super Intelligence

Imagine taking a time machine back to 1750—a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay. When you get there, you retrieve a dude, bring him to 2015, and then walk him around and watch him react to everything. It’s impossible for us to understand what it would be like for him to see shiny capsules racing by on a highway, talk to people who had been on the other side of the ocean earlier in the day, watch sports that were being played 1,000 miles away, hear a musical performance that happened 50 years ago, and play with a magical wizard rectangle that he could use to capture a real-life image or record a living moment, generate a map with a paranormal moving blue dot that shows him where he is, look at someone’s face and chat with them even though they’re on the other side of the country, and worlds of other inconceivable sorcery. This is all before you show him the internet or explain things like the International Space Station, the Large Hadron Collider, nuclear weapons, or general relativity.

This experience for him wouldn’t just be surprising or shocking or even mind-blowing—those words aren’t big enough. He might actually die!

This pattern—human progress moving quicker and quicker as time goes on—is what futurist Ray Kurzweil calls human history’s Law of Accelerating Returns. This happens because more advanced societies have the ability to progress at a faster rate than less advanced societies—because they’re more advanced.

“We are on the edge of change comparable to the rise of human life on Earth” — Vernor Vinge

There is a lot of excitement about artificial intelligence (AI) and how to create computers capable of intelligent behavior. After years of steady but slow progress on making computers “smarter” at everyday tasks, a series of breakthroughs in the research community and industry have recently spurred momentum and investment in the development of this field.

Today’s AI is confined to narrow, specific tasks, and isn’t anything like the general, adaptable intelligence that humans exhibit. Despite this, AI’s influence on the world is growing. The rate of progress we have seen will have broad implications for fields ranging from healthcare to image- and voice-recognition. In healthcare, the President’s Precision Medicine Initiative and the Cancer Moonshot will rely on AI to find patterns in medical data and, ultimately, to help doctors diagnose diseases and suggest treatments to improve patient care and health outcomes.

In education, AI has the potential to help teachers customize instruction for each student’s needs. And, of course, AI plays a key role in self-driving vehicles, which have the potential to save thousands of lives, as well as in unmanned aircraft systems, which may transform global transportation, logistics systems, and countless industries over the coming decades.

Like any transformative technology, however, artificial intelligence carries some risk and presents complex policy challenges along several dimensions, from jobs and the economy to safety and regulatory questions. For example, AI will create new jobs while phasing out some old ones—magnifying the importance of programs like TechHire that are preparing our workforce with the skills to get ahead in today’s economy, and tomorrow’s. AI systems can also behave in surprising ways, and we’re increasingly relying on AI to advise decisions and operate physical and virtual machinery—adding to the challenge of predicting and controlling how complex technologies will behave.

There are tremendous opportunities and an array of considerations across the Federal Government in privacy, security, regulation, law, and research and development to be taken into account when effectively integrating this technology into both government and private-sector activities.

That is why the White House Office of Science and Technology Policy announced public workshops over the coming months on topics in AI to spur public dialogue on artificial intelligence and machine learning and identify challenges and opportunities related to this emerging technology.

The Federal Government also is working to leverage AI for public good and toward a more effective government. A new National Science and Technology Council (NSTC) Subcommittee on Machine Learning and Artificial Intelligence will meet for the first time next week. This group will monitor state-of-the-art advances and technology milestones in artificial intelligence and machine learning within the Federal Government, in the private sector, and internationally; and help coordinate Federal activity in this space.

Broadly, between now and the end of the Administration, the NSTC group will work to increase the use of AI and machine learning to improve the delivery of government services. Such efforts may include empowering Federal departments and agencies to run pilot projects evaluating new AI-driven approaches and government investment in research on how to use AI to make government services more effective. Applications in AI to areas of government that are not traditionally technology-focused are especially significant; there is tremendous potential in AI-driven improvements to programs and delivery of services that help make everyday life better for Americans in areas related to urban systems and smart cities, mental and physical health, social welfare, criminal justice, the environment, and much more.

 

Editor’s note: Ideas inspired by:


Ed Felten. “Preparing for the future of Artificial Intelligence.” WhiteHouse.gov. N.p., Web. 5 May 2016.