Google’s plan for quantum computer supremacy

 

The field of quantum computing is undergoing a rapid shake-up, and engineers at Google have quietly set out a plan to dominate

SOMEWHERE in California, Google is building a device that will usher in a new era for computing. It’s a quantum computer, the largest ever made, designed to prove once and for all that machines exploiting exotic physics can outperform the world’s top supercomputers.

The quantum computing revolution has been a long time coming. In the 1980s, theorists realised that a computer based on quantum mechanics had the potential to vastly outperform ordinary, or classical, computers at certain tasks. But building one was another matter. Only recently has a quantum computer that can beat a classical one gone from a lab curiosity to something that could actually happen. Google wants to create the first.

“They are definitely the world leaders now, there is no doubt about it,” says Simon Devitt at the RIKEN Center for Emergent Matter Science in Japan. “It’s Google’s to lose. If Google’s not the group that does it, then something has gone wrong.”

We have had a glimpse of Google’s intentions. Last month, its engineers quietly published a paper detailing their plans (arxiv.org/abs/1608.00263). Their goal, audaciously named quantum supremacy, is to build the first quantum computer capable of performing a task no classical computer can.

“It’s a blueprint for what they’re planning to do in the next couple of years,” says Scott Aaronson at the University of Texas at Austin, who has discussed the plans with the team.

So how will they do it? Quantum computers process data as quantum bits, or qubits. Unlike classical bits, these can store a mixture of both 0 and 1 at the same time, thanks to the principle of quantum superposition. It’s this potential that gives quantum computers the edge at certain problems, like factoring large numbers. But ordinary computers are also pretty good at such tasks. Showing quantum computers are better would require thousands of qubits, which is far beyond our current technical ability.
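To make that idea concrete, here is a minimal sketch in Python. It is purely illustrative and has nothing to do with Google’s hardware: a qubit’s state is a two-element complex vector, and a measurement returns 0 or 1 with probabilities given by the squared magnitudes of its entries.

    # Minimal sketch of "a mixture of both 0 and 1" (illustrative only).
    import numpy as np

    ket0 = np.array([1, 0], dtype=complex)       # the classical-like state |0>
    ket1 = np.array([0, 1], dtype=complex)       # the classical-like state |1>
    qubit = (ket0 + ket1) / np.sqrt(2)           # equal superposition of 0 and 1

    probabilities = np.abs(qubit) ** 2           # Born rule: |amplitude|^2
    samples = np.random.choice([0, 1], size=10, p=probabilities)
    print(probabilities)                         # [0.5 0.5]
    print(samples)                               # a random mix of 0s and 1s

Each extra qubit doubles the number of amplitudes needed to describe the state, which is where the memory figures later in this article come from.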

Instead, Google wants to claim the prize with just 50 qubits. That’s still an ambitious goal – publicly, they have only announced a 9-qubit computer – but one within reach.

“It’s Google’s to lose. If Google’s not the group that does it, then something has gone wrong”

To help it succeed, Google has brought the fight to quantum’s home turf. It is focusing on a problem that is fiendishly difficult for ordinary computers but that a quantum computer will do naturally: simulating the behaviour of a random arrangement of quantum circuits.

Any small variation in the input into those quantum circuits can produce a massively different output, so it’s difficult for the classical computer to cheat with approximations to simplify the problem. “They’re doing a quantum version of chaos,” says Devitt. “The output is essentially random, so you have to compute everything.”

To push classical computing to the limit, Google turned to Edison, one of the most advanced supercomputers in the world, housed at the US National Energy Research Scientific Computing Center. Google had it simulate the behaviour of quantum circuits on increasingly larger grids of qubits, up to a 6 × 7 grid of 42 qubits.

This computation is difficult because as the grid size increases, the amount of memory needed to store everything balloons rapidly. A 6 × 4 grid needed just 268 megabytes, less than found in your average smartphone. The 6 × 7 grid demanded 70 terabytes, roughly 10,000 times that of a high-end PC.

Google stopped there because going to the next size up is currently impossible: a 48-qubit grid would require 2.252 petabytes of memory, almost double that of the top supercomputer in the world. If Google can solve the problem with a 50-qubit quantum computer, it will have beaten every other computer in existence.
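Those memory figures follow from simple arithmetic: a brute-force simulation stores one complex amplitude for every possible bit pattern of the qubits, so n qubits need 2^n amplitudes. The back-of-the-envelope check below, assuming 16 bytes per double-precision complex amplitude, roughly reproduces the numbers quoted above; it is an illustration, not the procedure used in Google’s paper.

    # Back-of-the-envelope check of the memory figures quoted above.
    def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
        """Memory needed to hold a full 2**n state vector of complex amplitudes."""
        return 2 ** n_qubits * bytes_per_amplitude

    print(state_vector_bytes(24) / 1e6)    # ~268 MB for the 6 x 4 grid (24 qubits)
    print(state_vector_bytes(42) / 1e12)   # ~70 TB for the 6 x 7 grid (42 qubits)
    # The 2.252 PB quoted for 48 qubits matches 8-byte (single-precision) amplitudes:
    print(state_vector_bytes(48, bytes_per_amplitude=8) / 1e15)   # ~2.25 PB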

Eyes on the prize

By setting out this clear test, Google hopes to avoid the problems that have plagued previous claims of quantum computers outperforming ordinary ones – including some made by Google.

Last year, the firm announced it had solved certain problems 100 million times faster than a classical computer by using a D-Wave quantum computer, a commercially available device with a controversial history. Experts immediately dismissed the results, saying they weren’t a fair comparison.

Google purchased its D-Wave computer in 2013 to figure out whether it could be used to improve search results and artificial intelligence. The following year, the firm hired John Martinis at the University of California, Santa Barbara, to design its own superconducting qubits. “His qubits are way higher quality,” says Aaronson.

It’s Martinis and colleagues who are now attempting to achieve quantum supremacy with 50 qubits, and many believe they will get there soon. “I think this is achievable within two or three years,” says Matthias Troyer at the Swiss Federal Institute of Technology in Zurich. “They’ve shown concrete steps on how they will do it.”

Martinis and colleagues have discussed a number of timelines for reaching this milestone, says Devitt. The earliest is by the end of this year, but that is unlikely. “I’m going to be optimistic and say maybe at the end of next year,” he says. “If they get it done even within the next five years, that will be a tremendous leap forward.”

The first successful quantum supremacy experiment won’t give us computers capable of solving any problem imaginable – based on current theory, those will need to be much larger machines. But having a working, small computer could drive innovation, or augment existing computers, making it the start of a new era.

Aaronson compares it to the first self-sustaining nuclear reaction, achieved by the Manhattan project in Chicago in 1942. “It might be a thing that causes people to say, if we want a fully scalable quantum computer, let’s talk numbers: how many billions of dollars?” he says.

Solving the challenges of building a 50-qubit device will prepare Google to construct something bigger. “It’s absolutely progress to building a fully scalable machine,” says Ian Walmsley at the University of Oxford.

For quantum computers to be truly useful in the long run, we will also need robust quantum error correction, a technique to mitigate the fragility of quantum states. Martinis and others are already working on this, but it will take longer than achieving quantum supremacy.

Still, achieving supremacy won’t be dismissed.

“Once a system hits quantum supremacy and is showing clear scale-up behaviour, it will be a flare in the sky to the private sector,” says Devitt. “It’s ready to move out of the labs.”

“The field is moving much faster than expected,” says Troyer. “It’s time to move quantum computing from science to engineering and really build devices.”

 

Editor’s note: Original Source: ‘NewScientist’

This article appeared in print under the headline “Google plans quantum supremacy”


Jacob Aron. “Revealed: Google’s plan for quantum computer supremacy”

NewScientist. N.p., Web. 31 August. 2016.

The proliferating growth of machine learning challenges silicon technology

The rise of artificial intelligence and impending end of Moore’s law means silicon chips are nearing the end of the line. Here are some alternatives.

SILICON has been making our computers work for almost half a century. Whether designed for graphics or number crunching, all information processing is done using a million-strong horde of tiny logic gates made from element number 14.

But silicon’s time may soon be up. Moore’s law – the prediction that the number of silicon transistors on microprocessors doubles every two years – is grinding to a halt because there is a limit to how many can be squeezed on a chip.

The machine-learning boom is another problem. The amount of energy silicon-based computers use is set to soar as they crunch more of the massive data sets that algorithms in this field require. The Semiconductor Industry Association estimates that, on current trends, computing’s energy demands will outstrip the world’s total energy supply by 2040.

So research groups all over the world are building alternative systems that can handle large amounts of data without using silicon. All of them strive to be smaller and more power efficient than existing chips.

Unstable computing

Julie Grollier leads a group at the UMPhy lab near Paris that looks at how nanodevices can be engineered to work more like the human brain. Her team uses tiny magnetic particles for computation, specifically pattern recognition.

When magnetic particles are really small they become unstable and their magnetic fields start to oscillate wildly. By applying a current, the team has harnessed these oscillations to do basic computations. Scaled up, Grollier believes the technology could recognize patterns far faster than existing techniques.

It would also be less power-hungry. The magnetic auto-oscillators Grollier works with could use 100 times less power than their silicon counterparts. They can be 10,000 times smaller too.

Igor Carron, who launched Paris-based start-up LightOn in December, has another alternative to silicon chips: light.

Carron won’t say too much about how his planned LightOn computers will work, but they will have an optical system that processes bulky and unwieldy data sets so machine learning algorithms can deal with them more easily. It does this using a mathematical technique called random projection. This method has been known about since 1984, but has always involved too many computations for silicon chips to handle. Now, Carron and his colleagues are working on a way to do the whole operation with light.
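Random projection itself is simple to sketch in software: multiply the data by a random matrix to squash many features into far fewer while approximately preserving the distances between samples. The snippet below is a generic Python/NumPy illustration of the technique, not a description of LightOn’s undisclosed optical system; the matrix sizes are invented.

    # Generic random projection (Johnson-Lindenstrauss style), illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20_000))            # 1,000 samples, 20,000 raw features

    k = 512                                        # much smaller target dimension
    R = rng.normal(size=(20_000, k)) / np.sqrt(k)  # random projection matrix

    X_small = X @ R                                # compressed data for the learner
    print(X_small.shape)                           # (1000, 512)

Pairwise distances in the compressed data approximate those in the original, which is why downstream machine learning algorithms can work on the smaller version; LightOn’s bet is that the big matrix multiplication can be done with light rather than silicon.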

“On current trends, computing’s energy demands could outstrip total supply by 2040“

What will these new ways of processing and learning from data make possible? Carron thinks machines that can learn without needing bulky processors will allow wearable computing to take off. They could also make the emerging “internet of things” – where computers are built into ordinary objects – far more powerful. These objects would no longer need to funnel data back and forth to data centres for processing. Instead, they will be able to do it on the spot.

Devices such as Grollier’s and Carron’s aren’t the only ones taking an alternative approach to computation. A group at Stanford University in California has built a chip containing 178 transistors out of carbon nanotubes, whose electrical properties make them more efficient switches than silicon transistors. And earlier this year, researchers at Ben-Gurion University in Israel and the Georgia Institute of Technology used DNA to build the world’s smallest diode, an electronic component used in computers.

For the time being, high-power silicon computers that handle massive amounts of data are still making huge gains in machine learning. But that exponential growth cannot continue forever. To really tap into and learn from all the world’s data, we will need learning machines in every pocket. Companies such as Facebook and Google are barely scratching the surface. “There’s a huge haul of data banging on their door without them being able to make sense of it,” says Carron.

 

Editor’s note: Original Source: ‘NewScientist’

This article appeared in print under the headline “Making light work of AI”


Hal Hodson. “Move over silicon: Machine learning boom means we need new chips”

NewScientist. N.p., Web. 24 August. 2016.

Cybersecurity as chess match: A new approach for governments

Cyber threats are growing in volume, intensity, and sophistication, and they aren’t going away—ever. And recent failures call into question the effectiveness of the billions already sunk into cybersecurity.

How can government agencies reverse the growing gap between security investment and effectiveness? Traditionally, cybersecurity has focused on preventing intrusions, defending firewalls, monitoring ports, and the like. The evolving threat landscape, however, calls for a more dynamic approach.

Whether it’s an inside or external threat, organizations are finding that building firewalls is less effective than anticipating the nature of threats—studying malware in the wild, before it exploits a vulnerability.

The evolving nature of cyber threats calls for a collaborative, networked defense, which means sharing information about vulnerabilities, threats, and remedies among a community of governments, companies, and security vendors. Promoting this kind of exchange between the public and private sectors was a key aim of the US Cyber Security Act of 2012.

Australia has taken a significant lead in working across government and the private sector to shore up collective defenses. The Australian Cyber Security Centre (ACSC) plays many roles, raising awareness of cybersecurity, reporting on the nature and extent of cyber threats, encouraging reporting of incidents, analyzing and investigating specific threats, coordinating national security operations, and heading up the Australian government’s response to hacking incidents. At its core, it’s a hub for information exchange: Private companies, state and territorial governments, and international partners all share discoveries at the ACSC.

The Australian approach begins with good network hygiene: blocking unknown executable files, automatically installing software updates and security patches on all computers, and restricting administrative privileges.

The program then aims to assess adversaries, combining threat data from multiple entities to strengthen collective intelligence. The system uploads results of intrusion attempts to the cloud, giving analysts from multiple agencies a larger pool of attack data to scan for patterns.

Cybersecurity experts have long valued collective intelligence, perhaps first during the 2001 fight against the Li0n worm, which exploited a vulnerability in BIND, the software behind many of the internet’s domain name servers. A few analysts noticed a spike in probes to port 53, which supports the Domain Name Service, the system for naming computers and network servers organized around domains. They warned international colleagues, who collaborated on a response. Soon, a system administrator in the Netherlands collected a sample of the worm, which allowed other experts to examine it in a protected testing environment, a “sandbox.” A global community of security practitioners then identified the worm’s mechanism and built a program to detect infections. Within 14 hours, they had publicized their findings widely enough to defend computers worldwide.

A third core security principle is to rethink network security. All too often, leaders think of it as a wall. But a Great Wall can be scaled—a Maginot Line can be avoided. Fixed obstacles are fixed targets, and that’s not optimal cyber defense. Think of cybersecurity like a chess match: Governments need to deploy their advantages and strengths against their opponents’ disadvantages and weaknesses.

Perpetual unpredictability is the best defense. Keep moving. Keep changing. No sitting; no stopping. Plant fake information. Deploy “honeypots” (decoy servers or systems). Move data around. If criminals get in, flood them with bad information.

The goal is to modify the defenses so fast that hackers waste money and time probing systems that have already changed. Savvy cybersecurity pros understand this: The more you change the game, the more your opponents’ costs go up, and the more your costs go down. Maybe they’ll move on to an easier target.

Agencies need to learn to love continuous change. New problems will arise. There’ll always be work.

This challenge for governments resembles that facing military strategists as their primary roles shift from war against established nations to continual skirmishes against elusive, unpredictable non-state actors. Your government will inevitably lose some cybersecurity skirmishes, but that doesn’t mean it’s failed. It’s a given that not every encounter will end in victory.

The important test lies in how government officials anticipate and counter moves by an ever-shifting cast of criminal adversaries.

Digital governments will need speed, dexterity, and adaptability to succeed on this new battlefield.

 

Editor’s note: Original Source: ‘Washington Technology’


William D. Eggers. “Cybersecurity as chess match: A new approach for governments”

Washington Technology. N.p., Web. 12 August. 2016.

The rising tide of zero-code development

In 2016, government systems integrators continue to battle a wide range of margin-squeezing challenges that stem from decreased federal spending.

They are tasked with developing demanding next-generation solutions in the mobile, big data and cloud computing areas.  However, it is often difficult to deliver acceptable technology solutions within budget.

The core issue is that developing customized solutions and systems tailored to unique program requirements takes significant investment. Systems integrators and their customers need technical advantages that enable them to solve problems and field advanced technology at a similar or lower level of effort.

Fortunately, the pace of commercial innovation is such that opportunities exist for systems integrators that were not even options in the very recent past. They can now leverage tools such as automated application factories that produce customizable mobile applications for a fraction of the investment in coding and development required in the past.

In fact, these low-code and zero-code solutions allow companies to rapidly build and deploy fully customized applications that are tailored to meet the unique business and workflow requirements of government. End users, without software or engineering training, can literally create mobile apps with custom forms, maps and features – all from a simple, graphical interface.

This is not just a modest improvement of the status quo; rather it is a completely disruptive innovation that dramatically lowers the cost of fielding high-end, tailored software solutions.

Enterprises can now build apps without requiring the expertise, expense and ongoing maintenance of commercial software.  Also, for service providers, it is possible to develop and private-label these apps in ways that demonstrate premium brand value without investing in mobile app development services or staff.

And, the government customer wins.

Government IT continues to face budget scrutiny at a time when their innovations are most needed for mission success. These new zero code applications allow the customer to rapidly build iOS, Android and web apps that are fully-customized to meet any need.

Zero code apps go beyond the “low code” platforms, which are becoming more common in the corporate enterprise space – especially for business process management (BPM) solutions. The challenge with these “low-code” applications is that they still require a level of software and engineering expertise to enable “citizen developers.”  Conversely, zero code applications literally do not require any coding and can be built by end users.

Of course, there will always be situations where more complex capabilities are required that extend outside the existing feature set available from zero-code platforms. But for the time being, we have limited the scope of systems integration and isolated the engineering effort (man-hours and budget) to only those areas. Further, as these new zero-code apps continue to expand the catalog of available features, the adaptation and customization costs will continue to shrink.

Ultimately, by offering these types of zero-code applications as part of technical solutions, we can help the government customer and the system integrator.  Government stakeholders and end users get the fully-customized application they need. The IT department and the systems integrator become heroes, delivering solutions at a fraction of the cost of traditional software development.

In the end, everyone truly wins.

 

Editor’s note: Original Source: ‘Washington Technology’


John Timar. “Get ready for the rising tide of zero-code development”

Washington Technology. N.p., Web. 4 August. 2016.

The US military has introduced its very own unmanned submarine hunter

Image Credits: DARPA

We are all aware of what submarines are capable of; in the past, they have been decisive in shaping wars. Now, with technological advances, the US military has introduced its very own unmanned submarine hunter. The ocean’s newest predator, a robotic ship designed to help the U.S. military hunt enemy submarines, has completed its first tests at sea.

Called the “Sea Hunter,” the 132-foot (40 meters) unmanned vessel is still getting its figurative sea legs, but the performance tests off the coast of San Diego have steered the project on a course to enter the U.S. Navy’s fleet by 2018, according to the Defense Advanced Research Projects Agency (DARPA), the branch of the U.S. Department of Defense responsible for developing new technologies for the military.

The Sea Hunter “surpassed all performance objectives for speed, maneuverability, stability, sea-keeping, acceleration/deceleration and fuel consumption,” representatives from Leidos, the company developing the Sea Hunter, said in a statement.

The autonomous submarine-hunting ship was christened in April, and is part of a DARPA initiative to expand the use of artificial intelligence in the military. The drone ship’s mission will be to seek out and neutralize enemy submarines, according to the agency.

Initial tests required a pilot on the ship, but the Sea Hunter is designed for autonomous missions.

“When the Sea Hunter is fully operational, it will be able to stay at sea for three months with no crew and very little remote control, which can be done from thousands of miles away,” Leidos officials said in the statement.

Advanced artificial intelligence software will continuously navigate the Sea Hunter safely around other ships and in rough waters, according to DARPA. The technology also allows for remote guidance if a specific mission requires it.

“It will still be sailors who are deciding how, when and where to use this new capability and the technology that has made it possible,” Scott Littlefield, DARPA program manager, said in a statement when the Sea Hunter was christened.

The Sea Hunter still faces a two-year test program, co-sponsored by DARPA and the Office of Naval Research. Leidos said upcoming tests will include assessments of the ship’s sensors, the vessel’s autonomous controls and more.

Other DARPA projects being driven by AI include a potential robot battlefield manager that helps decide the next move in a space war, and an AI technology that could decode enemy messages during air reconnaissance missions.

The unmanned submarine hunter has completed its first performance tests and is set to join the US Navy in 2018 to hunt enemy submarines lurking in the deep.

 

Editor’s note: Original Source: ‘Live Science’



Kacey Deamer. “US Military’s Robotic Submarine Hunter Completes First Tests at Sea”

Live Science. N.p., Web. 4 August. 2016.

Tiny ‘Atomic Memory’ Device Could Store All Books Ever Written

A new “atomic memory” device that encodes data atom by atom can store hundreds of times more data than current hard disks can, a new study finds.

“You would need just the area of a postage stamp to write out all books ever written,” said study senior author Sander Otte, a physicist at the Delft University of Technology’s Kavli Institute of Nanoscience in the Netherlands.

In fact, the researchers estimated that if they created a cube 100 microns wide — about the same diameter as the average human hair — made of sheets of atomic memory separated from one another by 5 nanometers, or billionths of a meter, the cube could easily store the contents of the entire U.S. Library of Congress.

As the world generates more data, researchers are seeking ways to store all of that information in as little space as possible. The new atomic memory devices that researchers developed can store more than 500 trillion bits of data per square inch (6.45 square centimeters) — about 500 times more data than the best commercial hard disk currently available, according to the scientists who created the new devices.
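Those headline numbers can be sanity-checked with rough arithmetic. Taking the reported density of roughly 500 terabits per square inch and the 100-micrometre cube with layers spaced 5 nanometres apart described above, the sketch below (a back-of-the-envelope illustration, not the authors’ own calculation) puts the cube’s capacity around 20 terabytes, broadly consistent with common estimates for the text of a very large library such as the Library of Congress.

    # Rough sanity check of the postage-stamp and cube claims (illustrative only).
    density_bits_per_sq_inch = 500e12        # reported ~500 terabits per square inch
    inch_in_um = 25_400.0

    layer_side_um = 100.0                    # 100-micrometre-wide sheets
    layer_area_sq_inch = (layer_side_um / inch_in_um) ** 2
    bits_per_layer = density_bits_per_sq_inch * layer_area_sq_inch

    layer_spacing_nm = 5.0
    layers = layer_side_um * 1000.0 / layer_spacing_nm   # 100 um stack / 5 nm spacing

    total_bytes = bits_per_layer * layers / 8
    print(f"{bits_per_layer / 1e9:.1f} Gbit per layer, {layers:.0f} layers")
    print(f"~{total_bytes / 1e12:.0f} TB per 100-micron cube")   # roughly 20 TB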

The scientists created their atomic memory device using a scanning tunneling microscope, which uses an extremely sharp needle to scan over surfaces just as a blind person would run his or her fingers over a page of braille to read it. Scanning tunneling microscope probes can not only detect atoms, but also nudge them around.

Computers represent data as 1s and 0s — binary digits known as bits that they express by flicking tiny, switch-like transistors either on or off. The new atomic memory device represents each bit as two possible locations on a copper surface; a chlorine atom can slide back and forth between these two positions, the researchers explained.

“If the chlorine atom is in the top position, there is a hole beneath it — we call this a 1,” Otte said in a statement. “If the hole is in the top position and the chlorine atom is therefore on the bottom, then the bit is a 0.” (Each square hole is about 25 picometers, or trillionths of a meter, deep.)

The bits are separated from one another by rows of other chlorine atoms. These rows could keep the bits in place for more than 40 hours, the scientists found. This system of packing atoms together is far more stable and reliable than atomic memory strategies that employ loose atoms, the researchers said.

These atoms were organized into 127 blocks of 64 bits. Each block was labeled with a marker of holes. These markers are similar to the QR codes now often used in ads and tickets. These markers can label the precise location of each block on the copper surface.

The markers can also label a block as damaged; perhaps this damage was caused by some contaminant or flaw in the copper surface — about 12 percent of blocks are not suitable for data storage because of such problems, according to the researchers. All in all, this orderly system of markers could help atomic memory scale up to very large sizes, even if the copper surface the data is encoded on is not entirely perfect, they said.

All in all, the scientists noted that this proof-of-principle device significantly outperforms current state-of-the-art hard drives in terms of storage capacity.

As impressive as creating atomic memory devices is, Otte said that for him, “The most important implication is not at all the data storage itself.”

Instead, for Otte, atomic memory simply demonstrates how well scientists can now engineer devices on the level of atoms. “I cannot, at this point, foresee where this will lead, but I am convinced that it will be much more exciting than just data storage,” Otte said.

“Just stop and think for a moment how far we got as humans that we can now engineer things with this amazing level of precision, and wonder about the possibilities that it may give,” Otte said.

Reading a block of bits currently takes about 1 minute, and rewriting a block of bits currently requires about 2 minutes, the researchers said. However, they noted that it’s possible to speed up this system by making probes move faster over the surfaces of these atomic memory devices, potentially for read-and-write speeds on the order of 1 million bits per second.

Still, the researchers cautioned that atomic memory will not record data in large-scale data centers anytime soon. Currently, these atomic memory devices only work in very clean vacuum environments where they cannot become contaminated, and require cooling by liquid nitrogen to supercold temperatures of minus 321 degrees Fahrenheit (minus 196 degrees Celsius, or 77 kelvins) to prevent the chlorine atoms from jittering around.

Still, such temperatures are “easier to obtain than you may think,” Otte said. “Many MRI scanners in hospitals are already kept at 4 kelvins (minus 452 degrees Fahrenheit, or minus 269 degrees Celsius) permanently, so it is not at all inconceivable that future storage facilities in data centers could be maintained at [liquid nitrogen temperatures].”

Future research will investigate different combinations of materials that may help atomic memory’s “stability at higher temperatures, perhaps even room temperature,” Otte said.

The scientists detailed their findings online on July 18th in the journal Nature Nanotechnology.

 

Editor’s note: Original Source: ‘Live Science’


Charles Q. Choi. “Tiny ‘Atomic Memory’ Device Could Store All Books Ever Written”

Live Science. N.p., Web. 28 July. 2016.

The internet decides its own existence

The Internet is a busy place. Every second, approximately 6,000 tweets are tweeted; more than 40,000 Google queries are searched; and more than 2 million emails are sent, according to Internet Live Stats, a website of the international Real Time Statistics Project.

But these statistics only hint at the size of the Web. As of September 2014, there were 1 billion websites on the Internet, a number that fluctuates by the minute as sites go defunct and others are born. And beneath this constantly changing (but sort of quantifiable) Internet that’s familiar to most people lies the “Deep Web,” which includes things Google and other search engines don’t index. Deep Web content can be as innocuous as the results of a search of an online database or as secretive as black-market forums accessible only to those with special Tor software. (Though Tor isn’t only for illegal activity, it’s used wherever people might have reason to go anonymous online.)

Combine the constant change in the “surface” Web with the unquantifiability of the Deep Web, and it’s easy to see why estimating the size of the Internet is a difficult task. However, analysts say the Web is big and getting bigger.

Data-driven

With about 1 billion websites, the Web is home to many more individual Web pages. One of these pages, www.worldwidewebsize.com, seeks to quantify the number using research by Internet consultant Maurice de Kunder. De Kunder and his colleagues published their methodology in February 2016 in the journal Scientometrics. To come to an estimate, the researchers sent a batch of 50 common words to be searched by Google and Bing. The researchers knew how frequently these words have appeared in print in general, allowing them to extrapolate the total number of pages out there based on how many contain the reference words. Search engines overlap in the pages they index, so the method also requires estimating and subtracting the likely overlap.
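The shape of that calculation is easy to sketch: if a benchmark word is known to appear in, say, 2 percent of all documents and a search engine reports 900 million pages containing it, that engine’s index must hold around 45 billion pages; averaging over the 50 benchmark words and then correcting for the overlap between Google’s and Bing’s indexes gives the combined estimate. The toy version below uses invented numbers purely to show the structure of the method.

    # Toy version of the indexed-web estimate (all numbers invented for illustration).
    # For each benchmark word: index_size ~ reported_hit_count / known_word_frequency.
    word_stats = {
        # word: (fraction of documents containing it, hits reported by the engine)
        "the":  (0.60, 28e9),
        "and":  (0.50, 23e9),
        "blue": (0.02, 0.9e9),
    }

    estimates = [hits / freq for freq, hits in word_stats.values()]
    engine_a = sum(estimates) / len(estimates)    # average over the benchmark words

    engine_b = 14e9                               # same procedure run on a second engine
    overlap_fraction = 0.8                        # estimated share of B already in A

    combined = engine_a + engine_b * (1 - overlap_fraction)
    print(f"engine A ~ {engine_a / 1e9:.0f} billion pages, "
          f"combined ~ {combined / 1e9:.0f} billion pages")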

According to these calculations, there were at least 4.66 billion Web pages online as of mid-March 2016. This calculation covers only the searchable Web, however, not the Deep Web.

So how much information does the Internet hold? There are three ways to look at that question, said Martin Hilbert, a professor of communications at the University of California, Davis.

“The Internet stores information, the Internet communicates information and the Internet computes information,” Hilbert told Live Science. The communication capacity of the Internet can be measured by how much information it can transfer, or how much information it does transfer at any given time, he said.

In 2014, researchers published a study in the journal Supercomputing Frontiers and Innovations estimating the storage capacity of the Internet at 10^24 bytes, or 1 million exabytes. A byte is a data unit comprising 8 bits, and is equal to a single character in one of the words you’re reading now. An exabyte is 1 billion billion bytes.

One way to estimate the communication capacity of the Internet is to measure the traffic moving through it. According to Cisco’s Visual Networking Index initiative, the Internet is now in the “zettabyte era.” A zettabyte equals 1 sextillion bytes, or 1,000 exabytes. By the end of 2016, global Internet traffic will reach 1.1 zettabytes per year, according to Cisco, and by 2019, global traffic is expected to hit 2 zettabytes per year.

One zettabyte is the equivalent of 36,000 years of high-definition video, which, in turn, is the equivalent of streaming Netflix’s entire catalog 3,177 times, Thomas Barnett Jr., Cisco’s director of thought leadership, wrote in a 2011 blog post about the company’s findings.

In 2011, Hilbert and his colleagues published a paper in the journal Science estimating the communication capacity of the Internet at 3 x 10^12 kilobits per second, a measure of bandwidth. This was based on hardware capacity, and not on how much information was actually being transferred at any moment.

In one particularly offbeat study, an anonymous hacker measured the size of the Internet by counting how many IPs (Internet Protocols) were in use. IPs are the wayposts of the Internet through which data travels, and each device online has at least one IP address. According to the hacker’s estimate, there were 1.3 billion IP addresses used online in 2012.

The Internet has vastly altered the data landscape. In 2000, before Internet use became ubiquitous, telecommunications capacity was 2.2 optimally compressed exabytes, Hilbert and his colleagues found. In 2007, the number was 65. This capacity includes phone networks and voice calls as well as access to the enormous information reservoir that is the Internet. However, data traffic over mobile networks was already outpacing voice traffic in 2007, the researchers found.

 

Editor’s note: Original Source: ‘Live Science’


Stephanie Pappas. “How big is the Internet, Really?”

Live Science. N.p., Web. 21 July. 2016.

Technology overpowering human behavior – Pokémon Go

Since the game’s US launch last week, I have personally seen plenty of people on the streets playing Pokémon Go and from what I’ve heard, many can say the same. Based on some initial data, it seems that pretty much no technology comes close to the rate of adoption that this single app has seen in the past few days. It’s been a wild ride to say the least.

The app is still the top download on both app stores, and there have already been dozens of articles across the web telling the stories of many aspiring Pokémon trainers — everything from robberies to sore legs. Pokémon Go has already become a (mostly) global phenomenon and from what we’ve seen so far, it’s technology at its very best.

For the uninitiated, trainers in the Pokémon universe — and, with Pokémon Go, in the real universe as well — roam around capturing Pokémon, battling others, and visiting gyms to level up.

However, this game is a childhood dream come true for many. Pokémon Go is the opportunity to actually become a “Pokémon Master” as it is called and roam the world to capture, collect, and battle. Technology has long made things once deemed science fiction a reality, but, apparently, no dream of personal computers, video calls, or virtual reality, comes close to the feeling of just pure nostalgia.

It’s For Everybody

You can’t help but feel wonder at how far technology has come. Pokémon was created in 1995, with the first Game Boy game coming a year later and the first anime series popping up in 1997. With globalization in full force, the Japanese invention quickly spread around the world.

With 279 million games sold as of February 2016, Pokémon is the second best-selling video game franchise — only behind the Mario series from Nintendo proper. It’s a global franchise and its many iconic characters — including, perhaps most notably, Pikachu — have left a significant mark on pop culture.

Many growing up in the late 90s and early 2000s distinctly remember watching Pokémon as part of early Saturday cartoons. It’s been almost 20 years since, and those early watchers are now in early adulthood, thus making the nostalgia factor super-potent for those who are now some of the most active on social media. It’s not a mystery why the game has spread like wild fire.

Over the past few days, there have been countless examples of people from all backgrounds and ages coming together to play Pokémon Go in the real world. From major metropolitan cities to smaller towns, people on the hunt for Pokémon will recognize those who are also playing the game and end up exchanging a few words. I can attest to this even in my relatively small neighborhood.

It’s remarkable, really: the game is helping people lose weight and get out of their houses — many are even claiming that it is already helping their mental health.

Encouraging People to be Social

But since the Pokémon universe is innately social, a number of activities can be done as a group in Pokémon Go. As Pokémon are not in limited supply, a bunch of people can go out together and capture the same creature from the exact same location. It limits competition in some regard, but causes the game to be much less confrontational and makes people more willing to share tips.

I came across half a dozen people playing at a local mall and struck up a conversation. Half of them were carrying battery packs to extend their game play, and one pair said they had traveled quite a distance together to come to this mall because it had a number of PokéStops and gyms to claim. As I was riding an escalator, tapping and swiping away on Pokémon Go, a stranger asked what team I was on and we ended up having a quick conversation. Being a normally shy person, I felt surprisingly comfortable asking other people the same question when I came across them in the park.

There are just so many stories of people giving total strangers tips about where to find Pokémon and striking up conversations — many that extend beyond the Pokémon game as well. In one particularly funny example, a player was (assuming the story is true) convinced to join a particular team for purposes of dominating the neighborhood, and a cop joined in as well.

Introducing AR to the Real World

It’s rare for an emerging technology to have an example product that can so perfectly showcase its potential to a wide swath of everyday people. The most obvious use of augmented reality in Pokémon Go is the ability to capture Pokémon against a live camera feed. This has resulted in both funny and rather jarring pictures of Pokémon ending up at the dinner table, at weddings, and even in the midst of protest.

While not as useful as a true heads-up display, this is still augmented reality — and it’s being introduced to a world still mostly unfamiliar with the tech in the most friendly way possible. When a consumer-ready, compromise-free gadget like HoloLens or the much-touted Magic Leap headset is introduced, people will remember Pokémon Go, and games like it could end up being at least one killer use case for the tech.

A Platform for Good

Some envision a future where people work, play, and spend a majority of their time in a connected virtual realm through the use of VR headsets. And these virtual- to real-world connections could become very real with games like Pokémon Go.

In addition to in-app purchases, imagine Niantic partnering with stores to show advertisements in Pokémon Go. Imagine stores — such as GameStop or Walmart — paying Niantic for a spot on the map to get players in their doors. Assuming the game doesn’t fade as fast as it arrived, there will be many opportunities for it to evolve into more than just a game over time.

And while outright in-universe advertising might ruin the game, there are some physical real world partnerships that could be struck (again, assuming the game is even still popular once the summer is over and kids everywhere go back to school). What if Niantic partnered with parks, libraries and other safe, open spaces to establish larger gyms or PokéStops? Theoretically, Pokémon Go could have dedicated physical hubs in the real world.

Approximately 20 years after its creation, Pokémon Go gives us a peek at an augmented reality future, but it’s also just a dream come true for many, many fans. Niantic Labs and The Pokémon Company managed to create a smartphone game that, in true Pokémon fashion, incorporates real-world social interaction — dating back to the days of connecting Game Boys together with link cables. And that’s undoubtedly one of the key reasons Pokémon Go has become such a hit.

Not long ago, people watched the clock so they could get home on time; today, they wander random streets with no time constraints. Is this game changing human behavior? Is technology overpowering human behavior?

Editor’s note: Article inspired by ‘9TO5Google’


Abner Li. “Opinion: Pokémon Go is technology at its absolute best”

9TO5Google. N.p., Web. 14 July. 2016.

Artificial, Artificial Intelligence

COMPUTERS still do some things very poorly. Even when they pool their memory and processors in powerful networks, they remain unevenly intelligent. Things that humans do with little conscious thought, such as recognizing patterns or meanings in images, language or concepts, only baffle the machines.

These lacunae in computers’ abilities would be of interest only to computer scientists, except that many individuals and companies are finding it harder to locate and organize the swelling mass of information that our digital civilization creates.

The problem has prompted a spooky, but elegant, business idea: why not use the Web to create marketplaces of willing human beings who will perform the tasks that computers cannot? Jeff Bezos, the chief executive of Amazon.com, has created Amazon Mechanical Turk, an online service involving human workers, and he has also personally invested in a human-assisted search company called ChaCha. Mr. Bezos describes the phenomenon very prettily, calling it “artificial artificial intelligence.”

Amazon Mechanical Turk (MTurk) is a crowdsourcing Internet marketplace enabling individuals and businesses (known as Requesters) to coordinate the use of human intelligence to perform tasks that computers are currently unable to do. Employers are able to post jobs known as Human Intelligence Tasks (HITs), such as choosing the best among several photographs of a storefront, writing product descriptions, or identifying performers on music CDs. Workers (called Providers in Mechanical Turk’s Terms of Service, or, more colloquially, Turkers) can then browse among existing jobs and complete them in exchange for a monetary payment set by the employer. To place jobs, the requesting programs use an open application programming interface (API), or the more limited MTurk Requester site.
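For the curious, here is roughly what posting a HIT through that API looks like today using Amazon’s boto3 library for Python. It is a minimal sketch: the task title, reward, and question markup below are invented placeholders, and the real options are described in the MTurk documentation.

    # Minimal sketch of posting a HIT via the MTurk API (placeholder task details).
    import boto3

    mturk = boto3.client("mturk", region_name="us-east-1")

    question_xml = """
    <HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
      <HTMLContent><![CDATA[
        <html><body>
          <p>Are these two product pages duplicates? Answer yes or no.</p>
        </body></html>
      ]]></HTMLContent>
      <FrameHeight>400</FrameHeight>
    </HTMLQuestion>
    """

    response = mturk.create_hit(
        Title="Identify duplicate product pages",
        Description="Look at two pages and say whether they describe the same product.",
        Reward="0.03",                    # payment in dollars, passed as a string
        MaxAssignments=3,                 # ask three workers so answers can be compared
        LifetimeInSeconds=24 * 3600,      # how long the HIT stays available
        AssignmentDurationInSeconds=300,  # time allowed per worker
        Question=question_xml,
    )
    print(response["HIT"]["HITId"])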

“Normally, a human makes a request of a computer, and the computer does the computation of the task,” he said. “But artificial artificial intelligences like Mechanical Turk invert all that. The computer has a task that is easy for a human but extraordinarily hard for the computer. So instead of calling a computer service to perform the function, it calls a human.”

Mechanical Turk began life as a service that Amazon itself needed. (The name recalls a famous 18th-century hoax, where what seemed to be a chess-playing automaton really concealed a human chess master.) Amazon had millions of Web pages that described individual products, but it wanted to weed out the duplicate pages. Software could help, but algorithmically eliminating all the duplicates was impossible, according to Mr. Bezos. So the company began to develop a Web site where people would look at product pages and be paid a few cents for every duplicate page they correctly identified.

Mr. Bezos figured that what had been useful to Amazon would be valuable to other businesses, too. The company opened Mechanical Turk as a public site in November 2005. Today, there are more than 100,000 “Turk Workers” in more than 100 countries who earn micropayments in exchange for completing a wide range of quick tasks called HITs, for human intelligence tasks, for various companies.

Mechanical Turk’s customers are corporations. By contrast, ChaCha.com, a start-up in Carmel, Ind., uses artificial artificial intelligence — sometimes also called crowdsourcing — to help individual computer users find better results when they search the Web. ChaCha, which began last year, pays 30,000 flesh-and-blood “guides” working from home or the local coffee shop as much as $10 an hour to direct Web surfers to the most relevant resources.

Amazon makes money from Mechanical Turk by charging companies 10 percent of the price of a successfully completed HIT. For simple HITs that cost less than 1 cent, Amazon charges half a cent. ChaCha intends to make money the way most other search companies do: by charging advertisers for contextually relevant links and advertisements.

Harnessing the collective wisdom of crowds isn’t new. It is employed by many of the “Web 2.0” social networks like Digg and Del.icio.us, which rely on human readers to select the most worthwhile items on the Web to read. But creating marketplaces of mercenary intelligences is genuinely novel.

What is it like to be an individual component of these digital, collective minds?

THERE have been two common objections to artificial artificial intelligence. The first, evident from searches on ChaCha, is that the networks are no more intelligent than their smartest members. Katharine Mieszkowski, writing last year on Salon.com, raised the second, more serious criticism. She saw Mechanical Turk as a kind of virtual sweatshop. “There is something a little disturbing about a billionaire like Bezos dreaming up new ways to get ordinary folk to do work for him for pennies,” she wrote.

The ever-genial Mr. Bezos dismisses the criticism. “MTurk is a marketplace where folks who have work meet up with folks who want to do work,” he said.

Why do people become Turk Workers and ChaCha Guides? In poor countries, the money earned could offer a significant contribution to a family’s wealth. But even Mr. Bezos concedes that Turk Workers from rich countries probably can’t live on the small sums involved. “The people I’ve seen commenting on blogs seem mostly to be using MTurk as a supplemental form of income,” he said.

We probably have at least another 25 years before computers are more powerful than human brains, according to the most optimistic artificial intelligence experts. Until then, people will be able to sell their idle brains to the companies and people who need the special processing power that they alone possess through marketplaces like Mechanical Turk and ChaCha.

Editor’s note: Article inspired by ‘NY Times’


Jason Pontin. “Artificial intelligence, with help from humans”

NY Times. N.p., Web. 07 July. 2016.

Drone Data Sparks a New Industrial Revolution

From farming to mining to building, the increasing availability of drones and the information they can map is changing how companies do business.

Businesses are learning that sometimes the best way to boost the bottom line is by reaching for the sky.

Commercial drone usage across a wide variety of industries is exploding as businesses take advantage of rapidly advancing technology and falling hardware prices to incorporate the technology into their work flow.

“Incorporation of commercial drones is going to continue to grow exponentially,” says Darr Gerscovich, senior vice president of marketing at DroneDeploy.

To date, the aerial data consulting company’s clients have used DroneDeploy drone software to map more than 2 million acres across 100 countries. It helps dozens of industries collect and interpret drone data. “We’re seeing a tipping point now, but it’s the first of many tipping points,” he said.

“Businesses are finding a tremendous amount of value in having aerial intelligence,” Gerscovich continued. “Getting data, and making sense of the data.”

In a little more than a year, DroneDeploy clients mapped an area larger than the state of Delaware, and they’re adding aerial data four times faster this year. Drone-captured data, it seems, is in high demand.

More than Google Earth

It’s tempting to think of commercial drone usage as a more detailed version of Google Earth, but the information is far more dynamic.

“Who are the primary users of Google Earth?” Gerscovich asked. “You and me—people with a goal of getting from point A to point B. Roads may change over time, but they typically don’t change that often.”

For Gerscovich’s clients, however, the surveyed areas change constantly.

“We’ve had plenty of examples where Google Earth or another satellite image provider just shows a bunch of trees or a wooded area, and after the drone flight, we see that there’s a full solar power plant there,” he said. “Static imagery is not sufficient.”

(Looking) Down on the Farm

One of the first, and heaviest, users of commercial drones is the agriculture industry.

“Farms have hundreds or thousands of acres,” Gerscovich explained. “They largely use drones for crop scouting. It saves the time of someone going out and driving around the fields, which is one of the ways it’s been done until now.”

Instead, a drone can fly over the entire area and spot which fields farmers need to pay attention to, rather than relying on what can be seen from the nearest driving path. Growers can then upload the images to the cloud and knit them together to make a map showing the condition of an entire crop.

“You can see the entire field and identify the areas where there’s an issue,” Gerscovich said. “During growing season, they’re trying to catch issues while there’s still time to address them.”

The condition of a crop can change with a few days of rain or dry weather, so multiple drone passes are necessary to provide a constant stream of data.

Data Mining and Construction Site Insights

The mining and construction industries have also been early and avid adopters of drone technology. While farms need quick maps of large areas, building and digging sites typically are smaller, but the need for detail is much higher.

“Generally, they want to understand site progress,” Gerscovich said. “In order to get daily or monthly status updates on the stage a project is in, for a large site, it used to take a half a day to walk the entire site. Now, they can do it in 15 minutes with a drone.”

Job sites also tend to make heavy use of 3D modeling, something that can be built from detailed drone data.

“If you’re building a tower, and you’re six months into the project, you can verify the structure is being developed according to plan,” Gerscovich said, explaining that the 3D image can then be loaded into the construction company’s AutoCAD system to compare the progress to the building plans.

“It helps people on site, and it also helps people back in the corporate offices to understand what’s happening,” he said.

Aerial data can also measure volume. At construction and mining sites, where there are often stockpiles of excavated dirt or cement materials, Gerscovich said, drones can give accurate measures of just how large a mound is. Compared to other methods, such as having people climb to the top of the mound with lasers to attempt to measure it, drone technology has its advantages.

“Drones are safer, faster and about half the cost as compared to traditional ground-based volumetrics,” said Dallas VanZanten, owner of aerial mapping company Skymedia Northwest.
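The volume calculation itself is conceptually simple once the drone photos have been stitched into an elevation model: subtract the base ground level from the surveyed surface and sum the difference over every grid cell. The sketch below shows that idea in Python on a made-up surface; it is a generic illustration, not DroneDeploy’s or Skymedia Northwest’s actual pipeline.

    # Generic stockpile volumetrics from a drone-derived elevation grid (made-up data).
    import numpy as np

    cell_size_m = 0.10                          # ~10 cm ground sampling distance
    base_elevation = 100.0                      # surveyed ground level, in metres

    # Fake surface model standing in for the photogrammetry output:
    # a mound about 4 m tall and 20 m in radius on otherwise flat ground.
    x, y = np.meshgrid(np.arange(-25, 25, cell_size_m), np.arange(-25, 25, cell_size_m))
    surface = base_elevation + 4.0 * np.clip(1 - (x**2 + y**2) / 20**2, 0, None)

    heights = np.clip(surface - base_elevation, 0, None)    # material above ground only
    volume_m3 = heights.sum() * cell_size_m ** 2             # height x cell area, summed
    print(f"Estimated stockpile volume: {volume_m3:.0f} cubic metres")   # ~2,500 m^3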

Inspection Gadget

An emerging market for drone technology is the inspection industry.

A DroneDeploy client in Mexico was contracted by the government to inspect 600 miles of road. Instead of employing aircraft or spending weeks driving and manually capturing data across the countryside, the company used a handful of drones and quickly produced more than eight terabytes of data.

How much is that? If the Mexican company used 16 GB smartphones, the highway data would have filled 512 of them.

Building inspectors are using drones to get a better look at the roof. Insurance companies, Gerscovich added, can use the resulting 3D images to assess damages.

“Say a tornado comes through an area,” he continued. “Instead of waiting for the claims inspector to arrive, they could fly over the area with a drone and quickly do a 3D model.”

Emergency response teams also incorporate aerial data. Drones can quickly create high-resolution maps of large areas, such as a wooded search zone, for search and rescue operations. Drones can even assist forensic specialists who need to inspect large plane or train crash sites.

“Before the inspectors arrive with cameras to start taking still images, they can create a 3D model, and then everything about the area is preserved,” Gerscovich said. “They can use it to measure distances and angles between things.”

Growth Continues to Skyrocket

In the early days of commercial drone usage, only the largest companies could afford to collect aerial data. Technology has helped lower the price of entry.

Engineering consultant Iain Butler, better known as The UAV Guy, raves that drones are “a disruptive technology. Literally anyone can crop scout with a drone and get actionable data within minutes.”

Just a couple years ago, most of the drones used commercially were custom-made, with a price tag of $10,000 to $20,000. DroneDeploy said today companies can pay far less.

“The hardware has gotten so good, so quickly, that today a majority of drones used commercially are bought off the shelf—high-end consumer drones,” Gerscovich said.

Today, an $800 to $1,500 investment is enough to get a business airborne and collecting data.

The biggest hurdle to using consumer drones is that the batteries typically last about 30 minutes. That’s long enough to map between 60 and 80 acres before running out of power.

“Having said that, we’re seeing agricultural companies doing very large maps with off-the-shelf quad copters,” Gerscovich says. “We had one client map 4,300 acres with a quad copter. That’s 3,500 football fields—a massive effort.”

It would also take more than 35 hours and 70 battery changes. “Obviously, they’re doing this because they’re seeing substantial value. Otherwise, no one would be out there doing it for that long,” he said.
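Those figures line up with simple arithmetic on the stated battery life and coverage. Assuming the low end of 60 acres per 30-minute battery (the only assumption here beyond the numbers already quoted):

    # Quick check of the 4,300-acre mapping effort described above.
    total_acres = 4_300
    acres_per_flight = 60              # low end of 60-80 acres per 30-minute battery
    minutes_per_flight = 30

    flights = total_acres / acres_per_flight             # ~72 flights
    hours_in_air = flights * minutes_per_flight / 60     # ~36 hours
    print(f"~{flights:.0f} battery changes, ~{hours_in_air:.0f} hours of flying")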

Still, companies in various industries are beginning to understand the value in the sky, and they’re finding innovative ways to use drones and help their businesses soar.

 

Editor’s note: Article reposted from ‘Drone Blog’


Shawn Krest. “Drone Data Sparks a New Industrial Revolution”

Drone Blog. N.p., Web. 30 June. 2016.