Protect yourself: new FOIA rules open more risks of disclosure

Last summer, Congress passed and President Obama signed into law the FOIA Improvement Act of 2016 (Public Law No. 114-185), which adds to and amends the Freedom of Information Act.

The amendments create a “presumption of openness,” limiting the federal government’s discretionary power to withhold requested information to cases where disclosure would result in “foreseeable harm.”

For those that transact business with, or even simply communicate with, the government (referred to as “submitters” in FOIA parlance), the changes mean that submitters such as government contractors and grant recipients must respond proactively when a FOIA request potentially targets confidential or proprietary data that has been shared with the government.

Importantly, the 2016 FOIA Improvement Act did not change FOIA Exemption 4, which protects from disclosure “trade secrets and commercial or financial information obtained from a person [that is] privileged or confidential.” Under Exemption 4, the government is prohibited from disclosing trade secrets or other proprietary/confidential information that a submitter has shared with it.

Unlike with some of the other FOIA exemptions, courts interpreting Exemption 4 have determined that the government lacks any discretion to disclose trade secrets or commercially confidential/proprietary information in response to a FOIA request.

The 2016 FOIA Improvement Act was passed to accelerate the FOIA process and to compel government FOIA officials to provide as much information as possible, as quickly as possible, in response to a FOIA request. The act now imposes a penalty (i.e., the waiver of the statutory FOIA fees) on an agency that fails to provide a timely FOIA response. The act also requires that a FOIA response segregate exempt information from releasable information within the same document; an agency can no longer simply refuse to produce any document containing exempt information.

In addition, the Act requires the agency to produce electronic copies of documents/data, which can be instantly disseminated by the requesting party, rather than paper documents, in response to a FOIA request.

Furthermore, the act requires the creation of a federal government FOIA portal that allows the same FOIA request to be simultaneously submitted to multiple agencies. As a result, submitters must be poised to respond immediately as soon as the government provides notice that a FOIA request seeks disclosure of the submitter’s data and/or documents.

As an initial step, whenever any person or entity first shares information/data with the government that it does not want disclosed to any third party, the title page and each subsequent page of the confidential document or data should be plainly marked as containing “confidential and proprietary information which is exempt from disclosure under FOIA.”

Next, when the agency contacts the submitter (as FOIA requires) to tell it that a request seeks the disclosure of its information, the submitter should promptly respond by identifying:

  1. The specific information within each responsive document that is exempt from disclosure.
  2. The particular FOIA exemption (there are nine) that prohibits disclosure (as noted above, Exemption 4 protects trade secrets and confidential/proprietary data).
  3. Why that exemption applies to each identified section of data or information that the submitter seeks to protect.

Also, the submitter (or submitter’s counsel) should attempt to maintain an open dialogue with the assigned agency FOIA official throughout the FOIA process to promptly address and resolve any disagreements about what should and should not be disclosed before the agency takes a final disclosure position, which is often difficult to unwind.

Finally, the submitter must be ready to assert a “reverse FOIA” action to prevent the disclosure of trade secrets or other confidential/proprietary information in the event that the agency disregards the submitter’s exemption recommendations, acting before the agency releases the submitter’s information in response to a FOIA request.

Editor’s note: Original Source ‘Washington Technology’


Doug Proxmire. “New FOIA rules open contractors to more risks of disclosure”

Washington Technology. N.p., Web. 17 Feb. 2017.

Never Let Down Your Computer Virus Awareness

Operating in today’s internet-shrouded atmosphere is getting to be like playing one of those first-person-shooter video games, where the most aware succeed and the oblivious become chowder. Everyone is at risk, from the high-profile business to the private user. Even the government and industrial networks of various countries have taken big hits from the array of dangerous computer viruses that have hit the internet since its inception.

So, indeed, you are in a sort of wild-west arena when you log on, and an aptitude for recognizing threats has become a staple no business, government, or private user can do without. Having top-of-the-line anti-virus software will go a long way toward creating your force field. However, you still have to possess the skill to maneuver around the computer bombs that go off if you “click it,” and some, these days, don’t even require a click.

Protecting today’s online atmosphere is nothing short of big business. The hackers will keep trying, and the anti-virus companies will keep revising their software to combat them. This threat, it’s apparent, will always be out there, and hackers are growing more sophisticated and complex as the clock ticks. While it is unclear whether the powers that be thought in depth about the attacks that could happen, the launch of the internet was definitely the future. The earliest hacks and viruses no doubt originated with an individual with an idea to cause havoc. The practice caught on like wildfire and produced some of the worst viruses in the short history of the internet.

From the early 1990s on, dangerous and damaging viruses have shocked the world. Take the virus “Nimda,” for instance. In 2001, a week after the 9/11 attacks, this virus affected millions of computers. Nimda’s main thrust was to slow down internet traffic, resulting in widespread network shutdowns. Another, in 2006, was a malicious Trojan horse program called “Storm Worm.” Storm Worm suckered users into opening an email with the subject line “230 dead as storm batters Europe.” Of course the subject line was a fraud, and users who clicked the fake link inside enabled an offsite perpetrator to operate the PC remotely. The perpetrators used this path to send spam throughout the internet. It is estimated that Storm Worm affected 10 million PCs.

In 1998, one of the most destructive viruses came into play. The “CIH,” or “Chernobyl,” virus infected Windows 95 and 98 executable files and remained in the machine’s memory, constantly infecting other executables on the machine. It is estimated that the CIH virus caused $250 million worth of destruction. 1999 brought a macro virus called “Melissa,” a mass-mailer virus that activated when the user clicked a link in an email. The email came from a known source, so it appeared legitimate, especially with the title “Here is the document you asked for… don’t show anyone else.” The virus would then immediately seek out the first 50 contacts in the user’s Outlook address book and email itself to them. One of the first viruses spread by email attachment, Melissa caused an estimated $300-$600 million in damage.

And it went on. In 2003, the “SQL Slammer,” or “Sapphire,” virus targeted servers by generating random IP addresses and discharging itself at them. This worm affected many businesses, banks, and community operations, including significant services provided by Bank of America, Continental Airlines, and Seattle’s 911 emergency system, to name a few. Estimated losses were between $950 million and $1.2 billion.

Others, such as the “Code Red” virus of 2001, activated on July 13 of that year. This virus did not require you to open an email attachment; it simply needed an open internet connection and then presented a webpage that said “Hacked by Chinese.” It brought down an estimated 400,000 servers, including the White House web server, and its damage is estimated at a $2.6 billion loss. The “Sobig.F” virus got into machines via an email telling the user they had a security issue; when it was opened, the intruder sent itself onward and harvested the entire address book. This virus replicated itself to the tune of infecting millions of PCs worldwide. Damages were estimated in the $3-4 billion range.

The one to do the most damage was the “MyDoom,” or “Novarg,” virus. On 26 January 2004, this virus circled the globe swiftly via email, and 152 million computers and countless servers went dark. It created a huge denial-of-service attack and crippled computing environments, causing damages worldwide estimated at $30 billion.

Among the more recent dangerous viruses is “Poison Ivy,” a remote-access Trojan in which the perpetrator uses backdoor technology to infect the user’s computer. Once it is installed, the perpetrator has control of everything, including recording audio and video. This virus targets personal information to compromise identities, which have been proven to be bought and sold globally; the haul includes online banking credentials, shopping accounts, Social Security numbers, and birth information.

Conficker, appearing in 2008, is a worm that targeted financial data. A very complex, difficult-to-stop virus, Conficker prompted the creation of a coalition of experts dedicated to stopping it. It was also called the “superbug.” The fact that this virus got in wherever it wanted and was able to do just about anything it wanted stumped everyone. Conficker has been reconfigured several times, and each time its effects are more sophisticated. Incredibly, the perpetrators have designed it to track the efforts taken to eradicate it.

We have a very unique responsibility, being online. The internet is, at this point, just like any town on the map: there are places to go, and there are places not to go. There are places on the internet that you might have to visit that are laced with lurking hackers just waiting for users to make that fateful “click,” even while you’re doing your financials, the stock market, shopping, and all the day-to-day things that technology has put “one touch” away. A good part of the battle can be waged here just by being vigilant.

While an aptitude for recognizing the “baddies” out there is a strong first suit, you’ll need help. The root of your defense lies in making sure you have a good anti-virus program, that it is always running, and that its virus database is updated frequently. Most anti-virus software has options to automate all of these concerns. If you use them, periodically check that everything is running as scheduled. There are viruses that serve as precursors to bigger threats: what they do is literally turn off all your anti-virus mechanisms without your knowing it until it’s too late.

So, be careful out there in your computing. Learn the signs that something is amiss, and act on them before taking another click. Once you get to know the common “this doesn’t look right” occurrences, the harder ones will be easier to recognize. One tip here for personal users (because most businesses will not let users do this): DO NOT download any .EXE (executable) program or file without running it through the scanner. You might just be saving your computer’s life.


 

Brian J. Schweikert “Never Let Down Your Computer Virus Awareness”

Sabre88 LLC. N.p., Web. 19 Oct. 2016.

Editor’s note: Original Sources;

http://www.crn.com/news/security/190300322/the-10-most-destructive-pc-viruses-of-all-time.htm
http://listdose.com/top-10-most-dangerous-computer-viruses-ever/
http://www.smithsonianmag.com/science-nature/top-ten-most-destructive-computer-viruses-159542266/?no-ist

 

Secret Behind Artificial Intelligence’s Preposterous Power

Spookily powerful artificial intelligence (AI) systems may work so well because their structure exploits the fundamental laws of the universe, new research suggests.

The new findings may help answer a longstanding mystery about a class of artificial intelligence programs that employ a strategy called deep learning. These deep learning, or deep neural network, programs are algorithms with many layers in which lower-level calculations feed into higher ones. Deep neural networks often perform astonishingly well at solving problems as complex as beating the world’s best player of the strategy board game Go or classifying cat photos, yet no one fully understands why.

It turns out, one reason may be that they are tapping into the very special properties of the physical world, said Max Tegmark, a physicist at the Massachusetts Institute of Technology (MIT) and a co-author of the new research.

The laws of physics present only this “very special class of problems” — the problems that AI shines at solving. “This tiny fraction of the problems that physics makes us care about and the tiny fraction of problems that neural networks can solve are more or less the same,” Tegmark said.

Last year, AI accomplished a task many people thought impossible: DeepMind, Google’s deep learning AI system, defeated the world’s best Go player after trouncing the European Go champion. The feat stunned the world because the number of potential Go moves exceeds the number of atoms in the universe, and past Go-playing robots performed only as well as a mediocre human player.

But even more astonishing than DeepMind’s utter rout of its opponents was how it accomplished the task.

“The big mystery behind neural networks is why they work so well,” said study co-author Henry Lin, a physicist at Harvard University. “Almost every problem we throw at them, they crack.”

For instance, DeepMind was not explicitly taught Go strategy and was not trained to recognize classic sequences of moves. Instead, it simply “watched” millions of games, and then played many, many more against itself and other players.

Like newborn babies, these deep-learning algorithms start out “clueless,” yet typically outperform other AI algorithms that are given some of the rules of the game in advance.

Another long-held mystery is why these deep networks are so much better than so-called shallow ones, which contain as few as one layer. Deep networks have a hierarchy and look a bit like connections between neurons in the brain, with lower-level data from many neurons feeding into another “higher” group of neurons, repeated over many layers. In a similar way, deeper layers of these neural networks make some calculations and then feed those results to a higher layer of the program, and so on.
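The layering described here is easy to sketch in code: each layer computes weighted sums of the previous layer’s outputs and passes the results upward. A minimal pure-Python illustration (the layer sizes and random weights are arbitrary placeholders, not from the research discussed):

```python
import random

def relu(v):
    # a common activation: keep positive values, zero out the rest
    return [max(0.0, x) for x in v]

def dense(x, w, b):
    # one layer: each output unit is a weighted sum of the inputs plus a bias
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

def deep_forward(x, layers):
    # lower-level results feed into successively higher layers
    for w, b in layers:
        x = relu(dense(x, w, b))
    return x

random.seed(0)

def rand_layer(n_in, n_out):
    w = [[random.gauss(0, 1) for _ in range(n_in)] for _ in range(n_out)]
    return w, [0.0] * n_out

# hypothetical sizes: an 8-value input passes through 16-, 16-, and 4-unit layers
layers = [rand_layer(8, 16), rand_layer(16, 16), rand_layer(16, 4)]
out = deep_forward([random.gauss(0, 1) for _ in range(8)], layers)
print(len(out))  # 4
```

A “shallow” network would be the same code with a single entry in `layers`; the mystery the article describes is why stacking many such steps works so much better.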

To understand why this process works, Tegmark and Lin decided to flip the question on its head.

“Suppose somebody gave you a key. Every lock you try, it seems to open. One might assume that the key has some magic properties. But another possibility is that all the locks are magical. In the case of neural nets, I suspect it’s a bit of both,” Lin said.

One possibility could be that the “real world” problems have special properties because the real world is very special, Tegmark said.

Take one of the biggest neural-network mysteries: These networks often take what seem to be computationally hairy problems, like the Go game, and somehow find solutions using far fewer calculations than expected.

It turns out that the math employed by neural networks is simplified thanks to a few special properties of the universe. The first is that the equations that govern many laws of physics, from quantum mechanics to gravity to special relativity, are essentially simple math problems, Tegmark said. The equations involve variables raised to a low power (for instance, 4 or less).

What’s more, objects in the universe are governed by locality, meaning they are limited by the speed of light. Practically speaking, that means neighboring objects in the universe are more likely to influence each other than things that are far from each other, Tegmark said.

Many things in the universe also obey what’s called a normal or Gaussian distribution. This is the classic “bell curve” that governs everything from traits such as human height to the speed of gas molecules zooming around in the atmosphere.
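That bell curve shows up whenever many small independent effects add together, a fact known as the central limit theorem, and it is easy to see numerically. A quick illustration (not from the article), totalling fair coin flips:

```python
import random

random.seed(0)
# total 100 fair coin flips, many times over; the totals cluster in a bell curve
samples = [sum(random.choice((0, 1)) for _ in range(100)) for _ in range(10000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean), round(var))  # near 50 (n*p) and 25 (n*p*(1-p))
```

The same clustering appears whether the small contributions are coin flips, molecular collisions, or the many factors behind human height.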

Finally, symmetry is woven into the fabric of physics. Think of the veiny pattern on a leaf, or the two arms, eyes and ears of the average human. At the galactic scale, if one travels a light-year to the left or right, or waits a year, the laws of physics are the same, Tegmark said.

All of these special traits of the universe mean that the problems facing neural networks are actually special math problems that can be radically simplified.

“If you look at the class of data sets that we actually come across in nature, they’re way simpler than the sort of worst-case scenario you might imagine,” Tegmark said.

There are also problems that would be much tougher for neural networks to crack, including encryption schemes that secure information on the web; such schemes just look like random noise.

“If you feed that into a neural network, it’s going to fail just as badly as I am; it’s not going to find any patterns,” Tegmark said.

While the subatomic laws of nature are simple, the emergent equations describing a bumblebee’s flight are incredibly complicated, whereas those governing gas molecules remain simple, Lin added. It’s not yet clear whether deep learning will perform as well at describing those complicated bumblebee flights as it does at describing gas molecules, he said.

“The point is that some ’emergent’ laws of physics, like those governing an ideal gas, remain quite simple, whereas some become quite complicated. So there is a lot of additional work that needs to be done if one is going to answer in detail why deep learning works so well,” Lin said. “I think the paper raises a lot more questions than it answers!”

 

Editor’s note: Original Source ‘LiveScience’


Tia Ghose. “The Spooky Secret Behind Artificial Intelligence’s Incredible Power”

LiveScience. N.p., Web. 7 Oct. 2016.

Sabre88, LLC Breaks Into Top 15 Among ICIC and Fortune’s Inner City 100 Winners

 

Annual ranking showcases the fastest-growing urban businesses in America

For the third time in as many years, the Initiative for a Competitive Inner City (ICIC) and Fortune have announced that Sabre88 has been selected for the prestigious 2016 Inner City 100 list. This recognition places Sabre88 in an exemplary lineage of nearly 900 fast-growing and innovative inner-city businesses.

Sabre88 ranked 14th overall on the list of 100. Sabre88, which provides consulting services to the federal government, reported 2015 revenues of $2.7 million and a five-year growth rate of 731 percent from 2011-2015. “We are delighted to earn a spot on the list of fastest-growing inner city businesses. It is a testament to the hard work and dedication of the Sabre88 team serving our government customers each day,” stated CEO Robert Cottingham.

ICIC’s Inner City 100 is an annually compiled and released list featuring high-power, high-potential businesses from around the country with headquarters in inner cities. Each company is selected by ICIC with help from a national network of nominating partners who seek to identify, spotlight, and further enable the named companies’ innovative urban entrepreneurship. Ranked by revenue growth, the esteemed recipients go on to have their names published in Fortune.

The list can be viewed on the Fortune website here.

In addition to announcing the list, company CEOs were invited to gather for a full-day event featuring thought-provoking sessions, insightful leadership advice, and robust networking opportunities. Past winners have reported meeting future multi-million dollar investors as a result of appearing on the Inner City 100 list and attending the accompanying colloquium.

The rankings for each company were announced at the Inner City 100 Conference and Awards Ceremony on Wednesday, September 14, 2016 at the Aloft Hotel in Boston, MA. Before the awards celebration, winners gathered for a full-day business symposium featuring management case studies from Harvard Business School professors and interactive sessions with top CEOs. Keynote speakers at this year’s event included Interim CEO of Staples Shira Goodman, Chairman and CEO of Pinnacle Group and Inner City 100 alumnus Nina Vaca, and Harvard Business School Professor and ICIC Founder and Chairman Michael E. Porter.  Other speakers included Corey Thomas, CEO of Rapid 7, Loren Feldman of Forbes, Lynda Applegate and Amy Edmondson from Harvard Business School,  John Stuart of PTC, Robert Wallace, CEO of Bithenergy, and Brook Colangelo of Houghton Mifflin Harcourt.

“We are extraordinarily proud of these pioneering entrepreneurs who lead the way in economic revitalization in America’s inner cities,” says Steve Grossman, CEO of ICIC, of the list of 100.

The Inner City 100 program recognizes and supports successful inner city business leaders, and celebrates their role in providing innovation and job creation in America’s cities. These companies strengthen local American economies, provide job opportunities for underrepresented communities, and drive forward economic and social development.

Boasting an average five-year growth rate of 458 percent between 2011 and 2015, the 2016 Inner City 100 winners represent a wide span of geography, hailing from 42 cities and 25 states. Collectively, the winners employed 7,324 people in 2015, and on average over a third of their employees live in the same neighborhood as the company.

Highlights of the 2016 Inner City 100 include:

  • Employed 7,324 workers in total in 2015.
  • Created 4,696 new jobs in the last five years.
  • On average, 34% of employees live in the same neighborhood as the company.
  • Average company age is 16 years.
  • Average 2015 revenue is $12.2 million.
  • 34% are women-owned.
  • 37% are minority-owned.
  • 6% of the winners are certified B-Corps.
  • 26 industries represented in the top 100.

# # #

Company description:  Sabre88 is a global consulting firm applying capabilities in financial services, billing support, FOIA, IT Help Desk Support, Data Entry and Document Scanning to government and commercial clients. With more than twenty years of combined personnel experience offering strategic solutions, Sabre88 staff advance the firm’s mission to provide civilian and defense agencies of the government with the necessary tools to address emerging challenges. Sabre88 was formed in January of 2008, with a mission to serve both civilian and defense agencies of the federal government. The founder, Robert Cottingham, Jr., started the firm out of a government need for innovative small businesses which provide a 100% customer focused service.

Inner City 100 Methodology: The Initiative for a Competitive Inner City (ICIC) defines inner cities as core urban areas with higher unemployment and poverty rates and lower median incomes than their surrounding metropolitan statistical areas. Every year, ICIC identifies, ranks, and spotlights the 100 fastest-growing businesses located in America’s inner cities. In 2016, Companies were ranked by revenue growth over the five-year period between 2011 and 2015. This list was audited by the independent accounting firm Rucci, Bardaro, and Falzone, PC.

Initiative for a Competitive Inner City (ICIC)

ICIC is a national nonprofit founded in 1994 by Harvard Business School professor Michael E. Porter. ICIC’s mission is to promote economic prosperity in America’s inner cities through private sector investment that leads to jobs, income and wealth creation for local residents. Through its research on inner city economies, ICIC provides businesses, governments and investors with the most comprehensive and actionable information in the field about urban market opportunities. The organization supports urban businesses through the Inner City 100, Inner City Capital Connections and the Goldman Sachs 10,000 Small Businesses programs. Learn more at www.icic.org or @icicorg.

 

FOR IMMEDIATE RELEASE

Contact:

Benjamin Bratton
973-321-4886
bbratton@sabre88.com

Matt Camp, ICIC
(617) 238-3014
mcamp@icic.org

Google’s plan for computer supremacy

 

The field of quantum computing is undergoing a rapid shake-up, and engineers at Google have quietly set out a plan to dominate

SOMEWHERE in California, Google is building a device that will usher in a new era for computing. It’s a quantum computer, the largest ever made, designed to prove once and for all that machines exploiting exotic physics can outperform the world’s top supercomputers.

The quantum computing revolution has been a long time coming. In the 1980s, theorists realised that a computer based on quantum mechanics had the potential to vastly outperform ordinary, or classical, computers at certain tasks. But building one was another matter. Only recently has a quantum computer that can beat a classical one gone from a lab curiosity to something that could actually happen. Google wants to create the first.

“They are definitely the world leaders now, there is no doubt about it,” says Simon Devitt at the RIKEN Center for Emergent Matter Science in Japan. “It’s Google’s to lose. If Google’s not the group that does it, then something has gone wrong.”

We have had a glimpse of Google’s intentions. Last month, its engineers quietly published a paper detailing their plans (arxiv.org/abs/1608.00263). Their goal, audaciously named quantum supremacy, is to build the first quantum computer capable of performing a task no classical computer can.

“It’s a blueprint for what they’re planning to do in the next couple of years,” says Scott Aaronson at the University of Texas at Austin, who has discussed the plans with the team.

So how will they do it? Quantum computers process data as quantum bits, or qubits. Unlike classical bits, these can store a mixture of both 0 and 1 at the same time, thanks to the principle of quantum superposition. It’s this potential that gives quantum computers the edge at certain problems, like factoring large numbers. But ordinary computers are also pretty good at such tasks. Showing quantum computers are better would require thousands of qubits, which is far beyond our current technical ability.

Instead, Google wants to claim the prize with just 50 qubits. That’s still an ambitious goal – publicly, they have only announced a 9-qubit computer – but one within reach.

To help it succeed, Google has brought the fight to quantum’s home turf. It is focusing on a problem that is fiendishly difficult for ordinary computers but that a quantum computer will do naturally: simulating the behaviour of a random arrangement of quantum circuits.

Any small variation in the input into those quantum circuits can produce a massively different output, so it’s difficult for the classical computer to cheat with approximations to simplify the problem. “They’re doing a quantum version of chaos,” says Devitt. “The output is essentially random, so you have to compute everything.”

To push classical computing to the limit, Google turned to Edison, one of the most advanced supercomputers in the world, housed at the US National Energy Research Scientific Computing Center. Google had it simulate the behaviour of quantum circuits on increasingly large grids of qubits, up to a 6 × 7 grid of 42 qubits.

This computation is difficult because as the grid size increases, the amount of memory needed to store everything balloons rapidly. A 6 × 4 grid needed just 268 megabytes, less than is found in your average smartphone. The 6 × 7 grid demanded 70 terabytes, roughly 10,000 times that of a high-end PC.
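These memory figures follow directly from how a simulated quantum state is stored: an n-qubit register has 2^n complex amplitudes, one per basis state, so each extra qubit doubles the requirement. A quick sanity check in Python, assuming 16 bytes per double-precision complex amplitude (or 8 bytes at single precision):

```python
def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    # one complex amplitude per basis state: 2**n of them
    return (2 ** n_qubits) * bytes_per_amplitude

print(state_vector_bytes(24) / 1e6)      # 6 x 4 grid, 24 qubits: ~268 MB
print(state_vector_bytes(42) / 1e12)     # 6 x 7 grid, 42 qubits: ~70 TB
print(state_vector_bytes(48, 8) / 1e15)  # 48 qubits, single precision: ~2.25 PB
```

The doubling per qubit is why simulation hits a wall in the high 40s of qubits while a real quantum device carries the state natively.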

Google stopped there because going to the next size up is currently impossible: a 48-qubit grid would require 2.252 petabytes of memory, almost double that of the top supercomputer in the world. If Google can solve the problem with a 50-qubit quantum computer, it will have beaten every other computer in existence.

Eyes on the prize

By setting out this clear test, Google hopes to avoid the problems that have plagued previous claims of quantum computers outperforming ordinary ones – including some made by Google.

Last year, the firm announced it had solved certain problems 100 million times faster than a classical computer by using a D-Wave quantum computer, a commercially available device with a controversial history. Experts immediately dismissed the results, saying they weren’t a fair comparison.

Google purchased its D-Wave computer in 2013 to figure out whether it could be used to improve search results and artificial intelligence. The following year, the firm hired John Martinis at the University of California, Santa Barbara, to design its own superconducting qubits. “His qubits are way higher quality,” says Aaronson.

It’s Martinis and colleagues who are now attempting to achieve quantum supremacy with 50 qubits, and many believe they will get there soon. “I think this is achievable within two or three years,” says Matthias Troyer at the Swiss Federal Institute of Technology in Zurich. “They’ve showed concrete steps on how they will do it.”

Martinis and colleagues have discussed a number of timelines for reaching this milestone, says Devitt. The earliest is by the end of this year, but that is unlikely. “I’m going to be optimistic and say maybe at the end of next year,” he says. “If they get it done even within the next five years, that will be a tremendous leap forward.”

The first successful quantum supremacy experiment won’t give us computers capable of solving any problem imaginable – based on current theory, those will need to be much larger machines. But having a working, small computer could drive innovation, or augment existing computers, making it the start of a new era.

Aaronson compares it to the first self-sustaining nuclear reaction, achieved by the Manhattan Project in Chicago in 1942. “It might be a thing that causes people to say, if we want a fully scalable quantum computer, let’s talk numbers: how many billions of dollars?” he says.

Solving the challenges of building a 50-qubit device will prepare Google to construct something bigger. “It’s absolutely progress toward building a fully scalable machine,” says Ian Walmsley at the University of Oxford.

For quantum computers to be truly useful in the long run, we will also need robust quantum error correction, a technique to mitigate the fragility of quantum states. Martinis and others are already working on this, but it will take longer than achieving quantum supremacy.

Still, achieving supremacy won’t be dismissed.

“Once a system hits quantum supremacy and is showing clear scale-up behaviour, it will be a flare in the sky to the private sector,” says Devitt. “It’s ready to move out of the labs.”

“The field is moving much faster than expected,” says Troyer. “It’s time to move quantum computing from science to engineering and really build devices.”

 

Editor’s note: Original Source ‘NewScientist’

This article appeared in print under the headline “Google plans quantum supremacy”


Jacob Aron. “Revealed: Google’s plan for quantum computer supremacy”

NewScientist. N.p., Web. 31 Aug. 2016.

Proliferating Growth in Machine Learning Challenges Silicon Technology

The rise of artificial intelligence and impending end of Moore’s law means silicon chips are nearing the end of the line. Here are some alternatives.

SILICON has been making our computers work for almost half a century. Whether designed for graphics or number crunching, all information processing is done using a million-strong horde of tiny logic gates made from element number 14.

But silicon’s time may soon be up. Moore’s law – the prophecy which dictates that the number of silicon transistors on microprocessors doubles every two years – is grinding to a halt because there is a limit to how many can be squeezed on a chip.

The machine-learning boom is another problem. The amount of energy silicon-based computers use is set to soar as they crunch more of the massive data sets that algorithms in this field require. The Semiconductor Industry Association estimates that, on current trends, computing’s energy demands will outstrip the world’s total energy supply by 2040.

So research groups all over the world are building alternative systems that can handle large amounts of data without using silicon. All of them strive to be smaller and more power efficient than existing chips.

Unstable computing

Julie Grollier leads a group at the UMPhy lab near Paris that looks at how nanodevices can be engineered to work more like the human brain. Her team uses tiny magnetic particles for computation, specifically pattern recognition.

When magnetic particles are really small they become unstable and their magnetic fields start to oscillate wildly. By applying a current, the team has harnessed these oscillations to do basic computations. Scaled up, Grollier believes the technology could recognize patterns far faster than existing techniques.

It would also be less power-hungry. The magnetic auto-oscillators Grollier works with could use 100 times less power than their silicon counterparts. They can be 10,000 times smaller too.

Igor Carron, who launched Paris-based start-up LightOn in December, has another alternative to silicon chips: light.

Carron won’t say too much about how his planned LightOn computers will work, but they will have an optical system that processes bulky and unwieldy data sets so machine learning algorithms can deal with them more easily. It does this using a mathematical technique called random projection. This method has been known about since 1984, but has always involved too many computations for silicon chips to handle. Now, Carron and his colleagues are working on a way to do the whole operation with light.
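
The article doesn’t reveal how LightOn’s optics work, but the underlying idea of random projection is easy to sketch in Python with NumPy. All sizes and names below are illustrative: multiply the data by a random matrix to compress it while approximately preserving the geometry that learning algorithms rely on.

```python
import numpy as np

# Toy sketch of random projection (Johnson-Lindenstrauss style):
# compress 10,000-dimensional samples down to 64 dimensions while
# approximately preserving pairwise distances.
rng = np.random.default_rng(0)

n_samples, d_high, d_low = 20, 10_000, 64
X = rng.normal(size=(n_samples, d_high))       # bulky data set

# Random Gaussian projection matrix, scaled so distances are
# preserved in expectation.
P = rng.normal(size=(d_high, d_low)) / np.sqrt(d_low)
X_low = X @ P                                  # compressed data

# Check: the distance between the first two samples survives the
# projection to within a modest relative error.
d_orig = np.linalg.norm(X[0] - X[1])
d_proj = np.linalg.norm(X_low[0] - X_low[1])
print(round(d_proj / d_orig, 2))               # typically close to 1.0
```

The appeal for optics is that this is one large matrix multiplication with no learned parameters, exactly the kind of fixed linear transform light can perform in a single pass.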


What will these new ways of processing and learning from data make possible? Carron thinks machines that can learn without needing bulky processors will allow wearable computing to take off. They could also make the emerging “internet of things” – where computers are built into ordinary objects – far more powerful. These objects would no longer need to funnel data back and forth to data centres for processing. Instead, they will be able to do it on the spot.

Devices such as Grollier’s and Carron’s aren’t the only ones taking an alternative approach to computation. A group at Stanford University in California has built a chip containing 178 transistors out of carbon nanotubes, whose electrical properties make them more efficient switches than silicon transistors. And earlier this year, researchers at Ben-Gurion University in Israel and the Georgia Institute of Technology used DNA to build the world’s smallest diode, an electronic component used in computers.

For the time being, high-power silicon computers that handle massive amounts of data are still making huge gains in machine learning. But that exponential growth cannot continue forever. To really tap into and learn from all the world’s data, we will need learning machines in every pocket. Companies such as Facebook and Google are barely scratching the surface. “There’s a huge haul of data banging on their door without them being able to make sense of it,” says Carron.

 

Editor’s note: Original Source: ‘NewScientist’

This article appeared in print under the headline “Making light work of AI”


Hal Hodson. “Move over silicon: Machine learning boom means we need new chips”

NewScientist. N.p., Web. 24 Aug. 2016.

Cybersecurity as chess match: A new approach for governments

Cyber threats are growing in volume, intensity, and sophistication, and they aren’t going away—ever. And recent failures call into question the effectiveness of the billions already sunk into cybersecurity.

How can government agencies reverse the growing gap between security investment and effectiveness? Traditionally, cybersecurity has focused on preventing intrusions, defending firewalls, monitoring ports, and the like. The evolving threat landscape, however, calls for a more dynamic approach.

Whether it’s an inside or external threat, organizations are finding that building firewalls is less effective than anticipating the nature of threats—studying malware in the wild, before it exploits a vulnerability.

The evolving nature of cyber threats calls for a collaborative, networked defense, which means sharing information about vulnerabilities, threats, and remedies among a community of governments, companies, and security vendors. Promoting this kind of exchange between the public and private sectors was a key aim of the US Cyber Security Act of 2012.

Australia has taken a significant lead in working across government and the private sector to shore up collective defenses. The Australian Cyber Security Centre (ACSC) plays many roles, raising awareness of cybersecurity, reporting on the nature and extent of cyber threats, encouraging reporting of incidents, analyzing and investigating specific threats, coordinating national security operations, and heading up the Australian government’s response to hacking incidents. At its core, it’s a hub for information exchange: Private companies, state and territorial governments, and international partners all share discoveries at the ACSC.

The Australian approach begins with good network hygiene: blocking unknown executable files, automatically installing software updates and security patches on all computers, and restricting administrative privileges.

The program then aims to assess adversaries, combining threat data from multiple entities to strengthen collective intelligence. The system uploads results of intrusion attempts to the cloud, giving analysts from multiple agencies a larger pool of attack data to scan for patterns.

Cybersecurity experts have long valued collective intelligence, perhaps first during the 2001 fight against the Li0n worm, which exploited a vulnerability in DNS server software. A few analysts noticed a spike in probes to port 53, which supports the Domain Name Service, the system for naming computers and network servers organized around domains. They warned international colleagues, who collaborated on a response. Soon, a system administrator in the Netherlands collected a sample of the worm, which allowed other experts to examine it in a protected testing environment, a “sandbox.” A global community of security practitioners then identified the worm’s mechanism and built a program to detect infections. Within 14 hours, they had publicized their findings widely enough to defend computers worldwide.
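
The pattern those analysts spotted, an unusual concentration of probes on one port, can be illustrated with a short Python sketch. The log entries and the spike threshold below are invented for illustration only.

```python
from collections import Counter

# Hypothetical firewall log entries: (source_ip, destination_port).
log = [
    ("203.0.113.5", 80), ("198.51.100.7", 443), ("203.0.113.9", 53),
    ("198.51.100.2", 53), ("203.0.113.11", 53), ("198.51.100.14", 53),
    ("203.0.113.20", 53), ("198.51.100.1", 22),
]

# Count probes per destination port and flag any port receiving far
# more attention than the average -- a crude spike detector.
counts = Counter(port for _, port in log)
average = sum(counts.values()) / len(counts)
spikes = [port for port, n in counts.items() if n > 2 * average]

print(spikes)  # -> [53]: DNS stands out, as it did with Li0n
```

Real intrusion-detection systems baseline traffic over time rather than against a single average, but the principle is the same: pooled data makes the anomaly visible.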

A third core security principle is to rethink network security. All too often, leaders think of it as a wall. But a Great Wall can be scaled—a Maginot Line can be avoided. Fixed obstacles are fixed targets, and that’s not optimal cyber defense. Think of cybersecurity like a chess match: Governments need to deploy their advantages and strengths against their opponents’ disadvantages and weaknesses.

Perpetual unpredictability is the best defense. Keep moving. Keep changing. No sitting; no stopping. Plant fake information. Deploy “honeypots” (decoy servers or systems). Move data around. If criminals get in, flood them with bad information.

The goal is to modify the defenses so fast that hackers waste money and time probing systems that have already changed. Savvy cybersecurity pros understand this: The more you change the game, the more your opponents’ costs go up, and the more your costs go down. Maybe they’ll move on to an easier target.

Agencies need to learn to love continuous change. New problems will arise. There’ll always be work.

This challenge for governments resembles that facing military strategists as their primary roles shift from war against established nations to continual skirmishes against elusive, unpredictable non-state actors. Your government will inevitably lose some cybersecurity skirmishes, but that doesn’t mean it’s failed. It’s a given that not every encounter will end in victory.

The important test lies in how government officials anticipate and counter moves by an ever-shifting cast of criminal adversaries.

Digital governments will need speed, dexterity, and adaptability to succeed on this new battlefield.

 

Editor’s note: Original Source: ‘Washington Technology’


William D. Eggers. “Cybersecurity as chess match: A new approach for governments”

Washington Technology. N.p., Web. 12 Aug. 2016.

The rising tide of zero-code development

In 2016, government systems integrators continue to battle a wide range of margin-squeezing challenges that stem from decreased federal spending.

They are tasked with developing demanding next-generation solutions in the mobile, big data and cloud computing areas.  However, it is often difficult to deliver acceptable technology solutions within budget.

The core issue is that developing customized solutions and systems tailored to unique program requirements requires a significant investment.  Systems integrators and their customers need technical advantages that enable them to solve problems and field advanced technology at a similar or lower level of effort.

Fortunately, the pace of commercial innovation is such that opportunities exist for systems integrators that were not even options in the very recent past.  They can now leverage tools such as automated application factories that produce customizable mobile applications for a fraction of the investment in coding and development required in the past.

In fact, these low-code and zero-code solutions allow companies to rapidly build and deploy fully customized applications that are tailored to meet the unique business and workflow requirements of government. End users, without software or engineering training, can literally create mobile apps with custom forms, maps and features – all from a simple, graphical interface.

This is not just a modest improvement of the status quo; rather it is a completely disruptive innovation that dramatically lowers the cost of fielding high-end, tailored software solutions.

Enterprises can now build apps without requiring the expertise, expense and ongoing maintenance of commercial software.  Also, for service providers, it is possible to develop and private-label these apps in ways that demonstrate premium brand value without investing in mobile app development services or staff.

And, the government customer wins.

Government IT continues to face budget scrutiny at a time when its innovations are most needed for mission success. These new zero-code applications allow the customer to rapidly build iOS, Android and web apps that are fully-customized to meet any need.

Zero-code apps go beyond the “low-code” platforms, which are becoming more common in the corporate enterprise space – especially for business process management (BPM) solutions. The challenge with these “low-code” applications is that they still require a level of software and engineering expertise to enable “citizen developers.”  Conversely, zero-code applications literally do not require any coding and can be built by end users.

Of course, there will always be situations where more complex capabilities are required that extend beyond the existing feature set available from zero-code platforms.  But for the time being, we have limited the scope of systems integration and isolated the engineering effort (man-hours and budget) to only those areas.  Further, as these new zero-code apps continue to expand the catalog of available features, the adaptation and customization costs will continue to shrink.

Ultimately, by offering these types of zero-code applications as part of technical solutions, we can help the government customer and the system integrator.  Government stakeholders and end users get the fully-customized application they need. The IT department and the systems integrator become heroes, delivering solutions at a fraction of the cost of traditional software development.

In the end, everyone truly wins.

 

Editor’s note: Original Source: ‘Washington Technology’


John Timar. “Get ready for the rising tide of zero-code development”

Washington Technology. N.p., Web. 4 Aug. 2016.

US military has introduced its very own unmanned submarine hunter

Image Credits: DARPA

We are all aware of what submarines are capable of; in past wars, they were among the biggest factors shaping the conflict. Now, with technological advancements, the US military has introduced its very own unmanned submarine hunter. The ocean’s newest predator, a robotic ship designed to help the U.S. military hunt enemy submarines, has completed its first tests at sea.

Called the “Sea Hunter,” the 132-foot (40 meters) unmanned vessel is still getting its figurative sea legs, but the performance tests off the coast of San Diego have steered the project on a course to enter the U.S. Navy’s fleet by 2018, according to the Defense Advanced Research Projects Agency (DARPA), the branch of the U.S. Department of Defense responsible for developing new technologies for the military.

The Sea Hunter “surpassed all performance objectives for speed, maneuverability, stability, sea-keeping, acceleration/deceleration and fuel consumption,” representatives from Leidos, the company developing the Sea Hunter, said in a statement.

The autonomous submarine-hunting ship was christened in April, and is part of a DARPA initiative to expand the use of artificial intelligence in the military. The drone ship’s mission will be to seek out and neutralize enemy submarines, according to the agency.

Initial tests required a pilot on the ship, but the Sea Hunter is designed for autonomous missions.

“When the Sea Hunter is fully operational, it will be able to stay at sea for three months with no crew and very little remote control, which can be done from thousands of miles away,” Leidos officials said in the statement.

Advanced artificial intelligence software will continuously navigate the Sea Hunter safely around other ships and in rough waters, according to DARPA. The technology also allows for remote guidance if a specific mission requires it.

“It will still be sailors who are deciding how, when and where to use this new capability and the technology that has made it possible,” Scott Littlefield, DARPA program manager, said in a statement when the Sea Hunter was christened.

The Sea Hunter still faces a two-year test program, co-sponsored by DARPA and the Office of Naval Research. Leidos said upcoming tests will include assessments of the ship’s sensors, the vessel’s autonomous controls and more.

Other DARPA projects being driven by AI include a potential robot battlefield manager that helps decide the next move in a space war, and an AI technology that could decode enemy messages during air reconnaissance missions.

The unmanned ship has completed its first performance tests, and is set to join the US Navy in 2018 to hunt enemy submarines lurking in the deep.

 

Editor’s note: Original Source: ‘Live Science’



Kacey Deamer. “US Military’s Robotic Submarine Hunter Completes First Tests at Sea”

Live Science. N.p., Web. 4 Aug. 2016.

Tiny ‘Atomic Memory’ Device Could Store All Books Ever Written

A new “atomic memory” device that encodes data atom by atom can store hundreds of times more data than current hard disks can, a new study finds.

“You would need just the area of a postage stamp to write out all books ever written,” said study senior author Sander Otte, a physicist at the Delft University of Technology’s Kavli Institute of Nanoscience in the Netherlands.

In fact, the researchers estimated that if they created a cube 100 microns wide — about the same diameter as the average human hair — made of sheets of atomic memory separated from one another by 5 nanometers, or billionths of a meter, the cube could easily store the contents of the entire U.S. Library of Congress.
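
That estimate holds up to back-of-envelope arithmetic. Using the figures reported in this article (500 trillion bits per square inch, sheets stacked every 5 nanometers), a quick Python check gives a capacity in the tens of terabytes, the scale commonly quoted for the Library of Congress’s text holdings:

```python
# Rough check of the 100-micron-cube estimate, using the article's
# figures: 500 trillion bits per square inch, layers every 5 nm.
INCH = 0.0254                      # meters
bits_per_m2 = 500e12 / INCH**2     # areal density from the article

side = 100e-6                      # cube side: 100 microns, in meters
layer_gap = 5e-9                   # 5 nanometers between sheets

layers = side / layer_gap          # 20,000 stacked sheets
bits_per_layer = bits_per_m2 * side**2
total_bytes = layers * bits_per_layer / 8

print(f"{total_bytes / 1e12:.0f} TB")   # -> 19 TB
```

Roughly 19 terabytes in a hair’s-width cube is indeed Library-of-Congress scale for text, though not for the library’s scanned images and audio.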

As the world generates more data, researchers are seeking ways to store all of that information in as little space as possible. The new atomic memory devices that researchers developed can store more than 500 trillion bits of data per square inch (6.45 square centimeters) — about 500 times more data than the best commercial hard disk currently available, according to the scientists who created the new devices.

The scientists created their atomic memory device using a scanning tunneling microscope, which uses an extremely sharp needle to scan over surfaces just as a blind person would run his or her fingers over a page of braille to read it. Scanning tunneling microscope probes can not only detect atoms, but also nudge them around.

Computers represent data as 1s and 0s — binary digits known as bits that they express by flicking tiny, switch-like transistors either on or off. The new atomic memory device represents each bit as two possible locations on a copper surface; a chlorine atom can slide back and forth between these two positions, the researchers explained.

“If the chlorine atom is in the top position, there is a hole beneath it — we call this a 1,” Otte said in a statement. “If the hole is in the top position and the chlorine atom is therefore on the bottom, then the bit is a 0.” (Each square hole is about 25 picometers, or trillionths of a meter, deep.)

The bits are separated from one another by rows of other chlorine atoms. These rows could keep the bits in place for more than 40 hours, the scientists found. This system of packing atoms together is far more stable and reliable than atomic memory strategies that employ loose atoms, the researchers said.

These atoms were organized into 127 blocks of 64 bits. Each block was labeled with a marker made of holes, similar to the QR codes now often used in ads and tickets, which records the precise location of that block on the copper surface.

The markers can also label a block as damaged; perhaps this damage was caused by some contaminant or flaw in the copper surface — about 12 percent of blocks are not suitable for data storage because of such problems, according to the researchers. All in all, this orderly system of markers could help atomic memory scale up to very large sizes, even if the copper surface the data is encoded on is not entirely perfect, they said.

All in all, the scientists noted that this proof-of-principle device significantly outperforms current state-of-the-art hard drives in terms of storage capacity.

As impressive as creating atomic memory devices is, Otte said that for him, “The most important implication is not at all the data storage itself.”

Instead, for Otte, atomic memory simply demonstrates how well scientists can now engineer devices on the level of atoms. “I cannot, at this point, foresee where this will lead, but I am convinced that it will be much more exciting than just data storage,” Otte said.

“Just stop and think for a moment how far we got as humans that we can now engineer things with this amazing level of precision, and wonder about the possibilities that it may give,” Otte said.

Reading a block of bits currently takes about 1 minute, and rewriting a block of bits currently requires about 2 minutes, the researchers said. However, they noted that it’s possible to speed up this system by making probes move faster over the surfaces of these atomic memory devices, potentially for read-and-write speeds on the order of 1 million bits per second.
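
Assuming the 64-bit blocks described above, those figures work out to roughly one bit per second today, which makes the projected million-bits-per-second target close to a millionfold speed-up. A quick calculation:

```python
# Throughput implied by the article's figures: one 64-bit block read
# per minute, and one written per two minutes.
BLOCK_BITS = 64

read_bps = BLOCK_BITS / 60        # ~1.1 bits per second
write_bps = BLOCK_BITS / 120      # ~0.5 bits per second

# The projected ~1 million bits/second would be roughly a millionfold
# speed-up for reads.
speedup = 1_000_000 / read_bps
print(f"read: {read_bps:.2f} b/s, projected speed-up: ~{speedup:,.0f}x")
```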

Still, the researchers cautioned that atomic memory will not record data in large-scale data centers anytime soon. Currently, these atomic memory devices only work in very clean vacuum environments where they cannot become contaminated, and require cooling by liquid nitrogen to supercold temperatures of minus 321 degrees Fahrenheit (minus 196 degrees Celsius, or 77 kelvins) to prevent the chlorine atoms from jittering around.

Still, such temperatures are “easier to obtain than you may think,” Otte said. “Many MRI scanners in hospitals are already kept at 4 kelvins (minus 452 degrees Fahrenheit, or minus 269 degrees Celsius) permanently, so it is not at all inconceivable that future storage facilities in data centers could be maintained at [liquid nitrogen temperatures].”

Future research will investigate different combinations of materials that may help atomic memory’s “stability at higher temperatures, perhaps even room temperature,” Otte said.

The scientists detailed their findings online on July 18th in the journal Nature Nanotechnology.

 

Editor’s note: Original Source: ‘Live Science’


Charles Q. Choi. “Tiny ‘Atomic Memory’ Device Could Store All Books Ever Written”

Live Science. N.p., Web. 28 July 2016.