The proliferating growth of machine learning challenges silicon technology

The rise of artificial intelligence and the impending end of Moore’s law mean silicon chips are nearing the end of the line. Here are some alternatives.

SILICON has been making our computers work for almost half a century. Whether designed for graphics or number crunching, all information processing is done using a million-strong horde of tiny logic gates made from element number 14.

But silicon’s time may soon be up. Moore’s law – the prophecy that the number of silicon transistors on microprocessors doubles every two years – is grinding to a halt, because there is a limit to how many transistors can be squeezed onto a chip.

The machine-learning boom is another problem. The amount of energy silicon-based computers use is set to soar as they crunch more of the massive data sets that algorithms in this field require. The Semiconductor Industry Association estimates that, on current trends, computing’s energy demands will outstrip the world’s total energy supply by 2040.

So research groups all over the world are building alternative systems that can handle large amounts of data without using silicon. All of them strive to be smaller and more power efficient than existing chips.

Unstable computing

Julie Grollier leads a group at the UMPhy lab near Paris that looks at how nanodevices can be engineered to work more like the human brain. Her team uses tiny magnetic particles for computation, specifically pattern recognition.

When magnetic particles are really small, they become unstable and their magnetic fields start to oscillate wildly. By applying a current, the team has harnessed these oscillations to do basic computations. Scaled up, Grollier believes, the technology could recognize patterns far faster than existing techniques.

It would also be less power-hungry. The magnetic auto-oscillators Grollier works with could use 100 times less power than their silicon counterparts. They can be 10,000 times smaller too.

Igor Carron, who launched Paris-based start-up LightOn in December, has another alternative to silicon chips: light.

Carron won’t say too much about how his planned LightOn computers will work, but they will have an optical system that processes bulky and unwieldy data sets so that machine learning algorithms can deal with them more easily. It does this using a mathematical technique called random projection. This method has been known since 1984, but has always involved too many computations for silicon chips to handle. Now, Carron and his colleagues are working on a way to do the whole operation with light.
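To make the idea concrete, here is a minimal sketch of random projection in Python, assuming a NumPy environment. It illustrates only the underlying mathematics; LightOn has not disclosed how its optical hardware performs the operation, and the data set, dimensions and seed below are all hypothetical.

```python
import numpy as np

# Minimal sketch of random projection: compress high-dimensional data
# by multiplying it with a random matrix. This is an illustration of the
# mathematics only, not LightOn's (undisclosed) optical implementation.

rng = np.random.default_rng(0)

n_samples, n_features, n_components = 1000, 10_000, 256

# A bulky, high-dimensional data set of the kind fed to learning algorithms.
X = rng.standard_normal((n_samples, n_features))

# Random Gaussian projection matrix, scaled so that pairwise distances
# are approximately preserved after compression.
R = rng.standard_normal((n_features, n_components)) / np.sqrt(n_components)

X_small = X @ R  # shape (1000, 256): far easier to process downstream

# Distances change only slightly on average, so structure is preserved.
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(X_small[0] - X_small[1])
print(f"distance before: {orig:.1f}, after: {proj:.1f}")
```

The key property, proved by Johnson and Lindenstrauss in 1984, is that distances between data points roughly survive the compression, which is why a learning algorithm can work on the much smaller matrix.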

“On current trends, computing’s energy demands could outstrip total supply by 2040”

What will these new ways of processing and learning from data make possible? Carron thinks machines that can learn without needing bulky processors will allow wearable computing to take off. They could also make the emerging “internet of things” – where computers are built into ordinary objects – far more powerful. These objects would no longer need to funnel data back and forth to data centres for processing. Instead, they would be able to do it on the spot.

Devices such as Grollier’s and Carron’s aren’t the only ones taking an alternative approach to computation. A group at Stanford University in California has built a chip containing 178 transistors out of carbon nanotubes, whose electrical properties make them more efficient switches than silicon transistors. And earlier this year, researchers at Ben-Gurion University in Israel and the Georgia Institute of Technology used DNA to build the world’s smallest diode, an electronic component used in computers.

For the time being, high-power silicon computers that handle massive amounts of data are still making huge gains in machine learning. But that exponential growth cannot continue forever. To really tap into and learn from all the world’s data, we will need learning machines in every pocket. Companies such as Facebook and Google are barely scratching the surface. “There’s a huge haul of data banging on their door without them being able to make sense of it,” says Carron.

 

Editor’s note: Original Source: ‘NewScientist’

This article appeared in print under the headline “Making light work of AI”


Hal Hodson. “Move over silicon: Machine learning boom means we need new chips”

NewScientist. N.p., Web. 24 August 2016.

Cybersecurity as chess match: A new approach for governments

Cyber threats are growing in volume, intensity, and sophistication, and they aren’t going away—ever. And recent failures call into question the effectiveness of the billions already sunk into cybersecurity.

How can government agencies reverse the growing gap between security investment and effectiveness? Traditionally, cybersecurity has focused on preventing intrusions, defending firewalls, monitoring ports, and the like. The evolving threat landscape, however, calls for a more dynamic approach.

Whether it’s an insider or an external threat, organizations are finding that building firewalls is less effective than anticipating the nature of threats: studying malware in the wild, before it exploits a vulnerability.

The evolving nature of cyber threats calls for a collaborative, networked defense, which means sharing information about vulnerabilities, threats, and remedies among a community of governments, companies, and security vendors. Promoting this kind of exchange between the public and private sectors was a key aim of the US Cyber Security Act of 2012.

Australia has taken a significant lead in working across government and the private sector to shore up collective defenses. The Australian Cyber Security Centre (ACSC) plays many roles, raising awareness of cybersecurity, reporting on the nature and extent of cyber threats, encouraging reporting of incidents, analyzing and investigating specific threats, coordinating national security operations, and heading up the Australian government’s response to hacking incidents. At its core, it’s a hub for information exchange: Private companies, state and territorial governments, and international partners all share discoveries at the ACSC.

The Australian approach begins with good network hygiene: blocking unknown executable files, automatically installing software updates and security patches on all computers, and restricting administrative privileges.

The program then aims to assess adversaries, combining threat data from multiple entities to strengthen collective intelligence. The system uploads results of intrusion attempts to the cloud, giving analysts from multiple agencies a larger pool of attack data to scan for patterns.

Cybersecurity experts have long valued collective intelligence, perhaps first during the 2001 fight against the Li0n worm, which exploited a vulnerability in DNS server software. A few analysts noticed a spike in probes to port 53, which supports the Domain Name System, the system for naming computers and network servers organized around domains. They warned international colleagues, who collaborated on a response. Soon, a system administrator in the Netherlands collected a sample of the worm, which allowed other experts to examine it in a protected testing environment, a “sandbox.” A global community of security practitioners then identified the worm’s mechanism and built a program to detect infections. Within 14 hours, they had publicized their findings widely enough to defend computers worldwide.
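The pattern those analysts spotted can be illustrated with a short sketch: count connection attempts per destination port and flag any port whose traffic jumps far above its recent baseline. This is a toy Python illustration with a hypothetical log format and threshold, not the tooling actually used in 2001.

```python
from collections import Counter

def port_counts(log_lines):
    """Count probes per destination port from lines like 'src_ip dst_port'."""
    counts = Counter()
    for line in log_lines:
        _, dst_port = line.split()
        counts[int(dst_port)] += 1
    return counts

def spikes(today, baseline, factor=10):
    """Ports probed at least `factor` times more often than their baseline."""
    return {port: n for port, n in today.items()
            if n >= factor * max(baseline.get(port, 0), 1)}

# Hypothetical firewall logs: a normal day, then a burst of port-53 probes.
baseline = port_counts(["10.0.0.1 22", "10.0.0.2 80", "10.0.0.3 53"])
today = port_counts(["10.0.0.%d 53" % i for i in range(40)] + ["10.0.0.1 22"])

print(spikes(today, baseline))  # {53: 40} -> unusual interest in DNS
```

Pooling logs from many organizations, as the ACSC's cloud approach does, simply gives this kind of analysis a much larger sample to detect spikes in.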

A third core security principle is to rethink network security. All too often, leaders think of it as a wall. But a Great Wall can be scaled—a Maginot Line can be avoided. Fixed obstacles are fixed targets, and that’s not optimal cyber defense. Think of cybersecurity like a chess match: Governments need to deploy their advantages and strengths against their opponents’ disadvantages and weaknesses.

Perpetual unpredictability is the best defense. Keep moving. Keep changing. No sitting; no stopping. Plant fake information. Deploy “honeypots” (decoy servers or systems). Move data around. If criminals get in, flood them with bad information.
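For a sense of what a honeypot involves at its simplest, here is an illustrative Python sketch that opens a decoy port and records who connects. The port number is hypothetical, and a real deployment would use a hardened, isolated system rather than a bare socket.

```python
import socket
from datetime import datetime, timezone

# Minimal decoy listener: accept connections on an otherwise unused port
# and log the source address. Purely illustrative; production honeypots
# are hardened, isolated systems, not a bare socket like this.

DECOY_PORT = 2323  # hypothetical: chosen to look like a telnet-style service

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", DECOY_PORT))
    srv.listen()
    while True:
        conn, (addr, port) = srv.accept()
        stamp = datetime.now(timezone.utc).isoformat()
        print(f"{stamp} probe from {addr}:{port}")  # feed this to analysts
        conn.close()  # give the intruder nothing real to interact with
```

Every connection to a port no legitimate user should touch is a free piece of intelligence about who is probing the network.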

The goal is to modify the defenses so fast that hackers waste money and time probing systems that have already changed. Savvy cybersecurity pros understand this: The more you change the game, the more your opponents’ costs go up, and the more your costs go down. Maybe they’ll move on to an easier target.

Agencies need to learn to love continuous change. New problems will arise. There’ll always be work.

This challenge for governments resembles that facing military strategists as their primary roles shift from war against established nations to continual skirmishes against elusive, unpredictable non-state actors. Your government will inevitably lose some cybersecurity skirmishes, but that doesn’t mean it’s failed. It’s a given that not every encounter will end in victory.

The important test lies in how government officials anticipate and counter moves by an ever-shifting cast of criminal adversaries.

Digital governments will need speed, dexterity, and adaptability to succeed on this new battlefield.

 

Editor’s note: Original Source: ‘Washington Technology’


William D. Eggers. “Cybersecurity as chess match: A new approach for governments”

Washington Technology. N.p., Web. 12 August 2016.

The rising tide of zero-code development

In 2016, government systems integrators continue to battle a wide range of margin-squeezing challenges that stem from decreased federal spending.

They are tasked with developing demanding next-generation solutions in the mobile, big data and cloud computing areas. However, it is often difficult to deliver acceptable technology solutions within budget.

The core issue is that developing customized solutions and systems tailored to unique program requirements takes significant investment. Systems integrators and their customers need technical advantages that enable them to solve problems and field advanced technology at a similar or lower level of effort.

Fortunately, the pace of commercial innovation is such that opportunities now exist for systems integrators that were not even options in the very recent past. They can now leverage tools such as automated application factories that produce customizable mobile applications for a fraction of the investment in coding and development required in the past.

In fact, these low-code and zero-code solutions allow companies to rapidly build and deploy fully customized applications that are tailored to meet the unique business and workflow requirements of government. End users, without software or engineering training, can literally create mobile apps with custom forms, maps and features – all from a simple, graphical interface.

This is not just a modest improvement on the status quo; rather, it is a completely disruptive innovation that dramatically lowers the cost of fielding high-end, tailored software solutions.

Enterprises can now build apps without requiring the expertise, expense and ongoing maintenance of commercial software.  Also, for service providers, it is possible to develop and private-label these apps in ways that demonstrate premium brand value without investing in mobile app development services or staff.

And, the government customer wins.

Government IT continues to face budget scrutiny at a time when its innovations are most needed for mission success. These new zero-code applications allow the customer to rapidly build iOS, Android and web apps that are fully customized to meet any need.

Zero-code apps go beyond the “low-code” platforms, which are becoming more common in the corporate enterprise space – especially for business process management (BPM) solutions. The challenge with these “low-code” platforms is that they still require a level of software and engineering expertise to enable “citizen developers.” Conversely, zero-code applications literally do not require any coding and can be built by end users.

Of course, there will always be situations that require more complex capabilities, extending beyond the existing feature set available from zero-code platforms. But for the time being, we have limited the scope of systems integration and isolated the engineering effort (man-hours and budget) to only those areas. Further, as these new zero-code apps continue to expand their catalogs of available features, adaptation and customization costs will continue to shrink.

Ultimately, by offering these types of zero-code applications as part of technical solutions, we can help both the government customer and the systems integrator. Government stakeholders and end users get the fully customized application they need. The IT department and the systems integrator become heroes, delivering solutions at a fraction of the cost of traditional software development.

In the end, everyone truly wins.

 

Editor’s note: Original Source: ‘Washington Technology’


John Timar. “Get ready for the rising tide of zero-code development”

Washington Technology. N.p., Web. 4 August 2016.

US military has introduced its very own unmanned submarine hunter

Image Credits: DARPA

We are all aware of what submarines are capable of. In past wars, submarines were among the biggest factors shaping the conflict. Now, with technological advancements, the US military has introduced its very own unmanned submarine hunter. The ocean’s newest predator, a robotic ship designed to help the U.S. military hunt enemy submarines, has completed its first tests at sea.

Called the “Sea Hunter,” the 132-foot (40 meters) unmanned vessel is still getting its figurative sea legs, but the performance tests off the coast of San Diego have steered the project on a course to enter the U.S. Navy’s fleet by 2018, according to the Defense Advanced Research Projects Agency (DARPA), the branch of the U.S. Department of Defense responsible for developing new technologies for the military.

The Sea Hunter “surpassed all performance objectives for speed, maneuverability, stability, sea-keeping, acceleration/deceleration and fuel consumption,” representatives from Leidos, the company developing the Sea Hunter, said in a statement.

The autonomous submarine-hunting ship was christened in April, and is part of a DARPA initiative to expand the use of artificial intelligence in the military. The drone ship’s mission will be to seek out and neutralize enemy submarines, according to the agency.

Initial tests required a pilot on the ship, but the Sea Hunter is designed for autonomous missions.

“When the Sea Hunter is fully operational, it will be able to stay at sea for three months with no crew and very little remote control, which can be done from thousands of miles away,” Leidos officials said in the statement.

Advanced artificial intelligence software will continuously navigate the Sea Hunter safely around other ships and in rough waters, according to DARPA. The technology also allows for remote guidance if a specific mission requires it.

“It will still be sailors who are deciding how, when and where to use this new capability and the technology that has made it possible,” Scott Littlefield, DARPA program manager, said in a statement when the Sea Hunter was christened.

The Sea Hunter still faces a two-year test program, co-sponsored by DARPA and the Office of Naval Research. Leidos said upcoming tests will include assessments of the ship’s sensors, the vessel’s autonomous controls and more.

Other DARPA projects being driven by AI include a potential robot battlefield manager that helps decide the next move in a space war, and an AI technology that could decode enemy messages during air reconnaissance missions.

The unmanned ship has completed its first performance tests, and is set to join the US Navy in 2018 to hunt enemy submarines lurking in the deep.

 

Editor’s note: Original Source: ‘Live Science’



Kacey Deamer. “US Military’s Robotic Submarine Hunter Completes First Tests at Sea”

Live Science. N.p., Web. 4 August 2016.