Federal Agencies Hope to Bid Farewell to Conventional Passwords

No matter how clever and well-constructed your current passwords may be, they may become obsolete under new guidance for federal system authentication. In a recent public preview document posted on GitHub, the National Institute of Standards and Technology (NIST) says it will make dramatic changes to its guidelines for federal agencies’ digital authentication methods.

In its new approach, NIST is revising its identity-proofing framework to align with current Office of Management and Budget (OMB) guidance, helping agencies choose the digital authentication technologies best suited to their needs. The approach separates identity verification into discrete component elements. Under NIST’s process, individuals would establish who they are through what is called identity assurance and would validate their credentials to gain entry into a given system through authenticator assurance, possibly using a chip card or encrypted identity card (www.FCW.com).

Furthermore, the document states that passwords could become entirely numeric, as security experts believe that requiring a mix of digits, letters, and symbols in conventional passwords has done little to protect user information while hurting usability and memorability. Instead, NIST advises that candidate passwords be tested against a list of unacceptable passwords: those exposed in previous breaches, dictionary words, and the specific words and names that users are most likely to choose.
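
A blocklist check of the kind NIST describes can be sketched in a few lines of Python; the breached passwords, dictionary words, and context-specific names below are tiny illustrative placeholders, not an official list:

```python
# Sketch of an "unacceptable password" check in the spirit of the NIST
# guidance described above. All three blocklists are invented examples.
BREACHED = {"password", "123456", "qwerty", "letmein"}
DICTIONARY_WORDS = {"sunshine", "dragon", "welcome"}
CONTEXT_NAMES = {"admin", "fedagency", "jsmith"}  # service- or user-specific terms

def is_acceptable(password: str) -> bool:
    """Reject any candidate found on an unacceptable-password list."""
    candidate = password.lower()
    return not any(candidate in blocklist
                   for blocklist in (BREACHED, DICTIONARY_WORDS, CONTEXT_NAMES))

print(is_acceptable("letmein"))                       # False: known breached password
print(is_acceptable("correct horse battery staple"))  # True: on none of the lists
```

A production verifier would check far larger lists and normalize input more aggressively, but the structure of the test is the same.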

To further guarantee security and protection, users will not be able to have a password “hint” that is ultimately accessible to unauthenticated personnel. In other words, the familiar “first elementary school” or “name of first pet” password prompt will cease to exist.

Although these changes to password security will take place among federal agencies, many Americans will not have this level of user authentication. The infographic below therefore includes a variety of useful tips and instructions on how to create a strong, breach-resistant password:

According to NIST, authentication technologies under these guidelines “should have a tested equal error rate of 1 in 1,000 or better, with a false-match rate of 1 in 1,000 or better” (www.FCW.com). When NIST implements the new guidelines, federal government user data will not only enjoy a greater level of security; confidential national data will also be better protected from malicious data breaches, hackers, and cyber-attacks.
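
For illustration, the quoted false-match rate is simply the fraction of impostor attempts that a biometric matcher wrongly accepts; the attempt counts below are invented numbers, not measurements from any federal system:

```python
# Hypothetical example of checking a matcher against the quoted
# "1 in 1,000 or better" false-match rate. Counts are invented.
impostor_attempts = 100_000   # comparisons that should all be rejected
false_matches = 80            # impostor attempts wrongly accepted

fmr = false_matches / impostor_attempts
meets_guideline = fmr <= 1 / 1000  # "1 in 1,000 or better"
print(f"FMR = {fmr:.4f}; meets guideline: {meets_guideline}")
```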

Road to Superintelligence

Imagine taking a time machine back to 1750—a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay. When you get there, you retrieve a dude, bring him to 2015, and then walk him around and watch him react to everything. It’s impossible for us to understand what it would be like for him to see shiny capsules racing by on a highway, talk to people who had been on the other side of the ocean earlier in the day, watch sports that were being played 1,000 miles away, hear a musical performance that happened 50 years ago, and play with your magical wizard rectangle that he could use to capture a real-life image or record a living moment, generate a map with a paranormal moving blue dot that shows him where he is, look at someone’s face and chat with them even though they’re on the other side of the country, and worlds of other inconceivable sorcery. This is all before you show him the internet or explain things like the International Space Station, the Large Hadron Collider, nuclear weapons, or general relativity.

This experience for him wouldn’t be surprising or shocking or even mind-blowing—those words aren’t big enough. He might actually die!

This pattern—human progress moving quicker and quicker as time goes on—is what futurist Ray Kurzweil calls human history’s Law of Accelerating Returns. This happens because more advanced societies have the ability to progress at a faster rate than less advanced societies—because they’re more advanced.

“We are on the edge of change comparable to the rise of human life on Earth” — Vernor Vinge

There is a lot of excitement about artificial intelligence (AI) and how to create computers capable of intelligent behavior. After years of steady but slow progress on making computers “smarter” at everyday tasks, a series of breakthroughs in the research community and industry have recently spurred momentum and investment in the development of this field.

Today’s AI is confined to narrow, specific tasks, and isn’t anything like the general, adaptable intelligence that humans exhibit. Despite this, AI’s influence on the world is growing. The rate of progress we have seen will have broad implications for fields ranging from healthcare to image- and voice-recognition. In healthcare, the President’s Precision Medicine Initiative and the Cancer Moonshot will rely on AI to find patterns in medical data and, ultimately, to help doctors diagnose diseases and suggest treatments to improve patient care and health outcomes.

In education, AI has the potential to help teachers customize instruction for each student’s needs. And, of course, AI plays a key role in self-driving vehicles, which have the potential to save thousands of lives, as well as in unmanned aircraft systems, which may transform global transportation, logistics systems, and countless industries over the coming decades.

Like any transformative technology, however, artificial intelligence carries some risk and presents complex policy challenges along several dimensions, from jobs and the economy to safety and regulatory questions. For example, AI will create new jobs while phasing out some old ones—magnifying the importance of programs like TechHire that are preparing our workforce with the skills to get ahead in today’s economy, and tomorrow’s. AI systems can also behave in surprising ways, and we’re increasingly relying on AI to advise decisions and operate physical and virtual machinery—adding to the challenge of predicting and controlling how complex technologies will behave.

There are tremendous opportunities and an array of considerations across the Federal Government in privacy, security, regulation, law, and research and development to be taken into account when effectively integrating this technology into both government and private-sector activities.

That is why the White House Office of Science and Technology Policy announced public workshops over the coming months on topics in AI to spur public dialogue on artificial intelligence and machine learning and identify challenges and opportunities related to this emerging technology.

The Federal Government also is working to leverage AI for public good and toward a more effective government. A new National Science and Technology Council (NSTC) Subcommittee on Machine Learning and Artificial Intelligence will meet for the first time next week. This group will monitor state-of-the-art advances and technology milestones in artificial intelligence and machine learning within the Federal Government, in the private sector, and internationally; and help coordinate Federal activity in this space.

Broadly, between now and the end of the Administration, the NSTC group will work to increase the use of AI and machine learning to improve the delivery of government services. Such efforts may include empowering Federal departments and agencies to run pilot projects evaluating new AI-driven approaches and government investment in research on how to use AI to make government services more effective. Applications in AI to areas of government that are not traditionally technology-focused are especially significant; there is tremendous potential in AI-driven improvements to programs and delivery of services that help make everyday life better for Americans in areas related to urban systems and smart cities, mental and physical health, social welfare, criminal justice, the environment, and much more.

Editor’s note: Ideas inspired from Ed Felten, “Preparing for the Future of Artificial Intelligence,” WhiteHouse.gov, Web, 5 May 2016.

Most Wanted: Catching a Cybercriminal

Most people in the United States are aware that the worst criminal offenders around the globe appear on the Federal Bureau of Investigation’s (FBI) “Most Wanted Fugitives” list, the “Most Wanted Terrorists” list, or the “Wanted by the FBI” podcast. But what about cybercriminals? As hackers have grown more advanced in their techniques, the FBI’s investigators have historically been less concerned with the identity of a perpetrator than with preventing access to systems in the first place. Recently, however, the FBI has begun to target the individual perpetrators of cybercrime.

Given the ubiquity of cybercrime in the age of technology, the agency’s list of “Most Wanted Cybercriminals” has grown considerably since March. In fact, the list grew by nearly 50 percent when two young Syrians were charged with attempting to hack United States companies and media organizations, followed by the indictment of seven Iranian citizens accused of coordinating a months-long cyberattack on financial organizations located in New York. When Attorney General Loretta Lynch announced the indictments, she said the decision to publicize the most-wanted cybercriminals is a “new approach” at the Department of Justice that falls in line with its name-and-shame campaign (www.nextgov.com). The campaign, which launched in 2012 by placing five Chinese hackers on the cyber most-wanted list, has so far listed only men, most of them foreign nationals.

The infographic below explains why the United States needs more cybersecurity professionals to thwart cybercriminals:

As we live in the age of digital technology, the United States hopes to make a more concerted effort to protect both public and private data. Without such proactive measures, the United States’ digital infrastructure may increasingly become the target of malicious cybercriminal breaches, hacks, and cyberattacks.

Open Data Can Do More Than Government Thinks

Government thinks open data is an add-on that boosts transparency, but it’s more than that. Most open data portals don’t look like labors of love. They look like abandoned last-minute science fair projects. The open data movement is more than a decade old, yet some are still asking why they should even bother.

“Right now, it is irrational for almost anybody who works in government to open data. It makes no sense,” Waldo Jaquith said. “Most people, it’s not in their job description to open data — they’re just the CIO. So if he fails to open data, worst case, nothing bad happens. But if he does open some data and it has PII [personally identifying information], then his worst case is that he’s hauled before a legislative subcommittee, grilled, humiliated and fired.”

Jaquith would know: he is the director of U.S. Open Data and one of the movement’s most active advocates. Open data is struggling to gain financial and spiritual backing. It may fizzle out within the next two years, Jaquith said, and a glance at government’s attitude toward the entire “open” concept supports that timeline.

The people who are really into open data — like Jaquith — aren’t the fad-following type. Open data’s disciples believe in it because they’ve seen that just a little prodding in the right spots can make a big difference. In 2014, Jaquith bought a temporary license for Virginia’s business registration data for $450 and published the records online. That data wasn’t just news to the public — it had been kept from Virginia’s municipal governments too. Before that, the state’s municipal governments had no way of knowing which businesses existed within their boundaries and, therefore, they had no way of knowing which businesses weren’t paying license fees and property taxes. Jaquith estimated (“wildly,” he admits) that this single data set is worth $100 million to Virginia’s municipal governments collectively.

The disconnect between the massive operational potential that open data holds and government’s slow movement toward harnessing it can be explained simply. Government thinks open data is an add-on that boosts transparency, but it’s more than that. Open data isn’t a $2 side of guacamole that adds flavor to the burrito. It’s the restaurant’s mission statement.

Here are six ideas that can help government more fully realize open data’s transformative power.

1. RECONSIDER YOUR DATA’S PURPOSE

Open data isn’t just about transparency and economic development. If it were, those things would have happened by now. People still largely don’t know what their governments are doing and no one’s frequenting their city’s open data portal to find out — they read the news. Open data portals haven’t stopped corruption; the unscrupulous simply reroute their activities around the spotlight. And if anyone’s using open data to build groundbreaking apps that improve the world and generate industry, they’re doing a great job keeping it a secret. For government, open data is about working smarter.

“I’m tired of the argument of ‘Oh, it will unlock value to the private sector,’” Jaquith said. “That’s nice. I hope people make billions of dollars off of that. But nobody in any government is going to spend any real amount of time on all the work that goes into opening all the data sets on a sustainable, complete basis because some stranger somewhere might get rich.”

Open data’s most basic advantage is that it makes life easier for government workers. Information that’s requested regularly can be put online, freeing workers to do other tasks. At its best, open data uncovers interjurisdictional insights that save money and improve operations. And no matter how tenuous, peripheral bonuses like transparency and economic development are still there too. Governments aren’t gaining the benefits of open data today because there’s not been a rigorous effort to integrate the concept of openness into public-sector work.

One unnamed city that ranks respectably in the U.S. City Open Data Census has more than 1,000 records on its open data portal. But only 132 of those records are data sets and 86 of those data sets are pieces of a single budget that have been split apart. This is a common practice across the public sector and one that reveals intent. For the most part, governments aren’t publishing their data because they know it’s a useful resource that ought to be easily accessible, well curated, neat and current so that it can be used by all. It’s because 1,000 sounds better than 50 when an official is giving a speech or addressing stakeholders, and they’re not the ones who have to use it.

2. CONSUME YOUR OWN OPEN DATA

Governments use data. Open data portals are designed for displaying and sharing information in an organized way. Therefore, governments should use a tool designed for the thing they’re trying to do. Even putting aside the “open” concept, public-sector offices around the nation would benefit hugely from having a common, shared pool of data they can draw upon when they need reliable information. Putting the data online is the most practical way to do that — and it also happens to meet the political dictates of transparency — but government should be doing this for its own sake.

“The most common mistake I see governments make with open data is thinking that publication is the end of the activity, rather than the beginning of the activity,” said Dan O’Neil, executive director of the Smart Chicago Collaborative. “Because publishing data can be, if we live in a perfect world, simply a prefatory step to allowing residents to talk about how data affects their lives and helps them live better. But usually, what happens is they publish data and they run as fast as they can in the other direction.”

3. PLAN BEYOND TECHNOLOGY

Open data has outgrown the novelty phase, and that means it needs organizational and policy support to survive. It needs comprehensive planning and believers who will act. People wouldn’t be giving up much if they abandoned open data today, O’Neil said, because open data hasn’t done much. The tragedy of giving up now, he said, would purely be a loss of prospect, because open data could change the world if the focus were shifted away from technology and toward the needs of the people.

An organization called City Bureau encourages young people of color to become reporters in an attempt to restore balance to journalistic coverage on the south and west sides of Chicago. Another journalistic endeavor on Chicago’s South Side, the Invisible Institute, serves as a watchdog organization that uses investigative reporting, litigation and public discussion to further its civil rights goals. O’Neil’s world is one of civic tech and social justice, but regardless of whether a person supports these particular groups ideologically, everyone can learn from their approach.

“That’s where it’s at,” O’Neil said. “Getting data that isn’t open and making it open and then having an actual community strategy around analyzing not just the data, but the social justice issues around the general milieu.”

Government needs to do the same if open data is to find meaning. Just putting data online and hoping for the best isn’t wrong, but it doesn’t do much. Open data needs a clear plan, and it needs to come from a wide patronage within government.

“The most common mistake is focusing on the project over the practice,” said Will Saunders, Washington state’s open data guy (his actual title). “It’s always attractive to have an executive sponsor, and a lot of times open data projects get started as a transparency commitment, as ‘a hallmark of my administration’ kind of thing. [Sometimes] you wind up having a diligent, small group of folks who facilitate the publication of data and then if there’s a leadership change in three or four years, then a lot of the sustainability just isn’t there.”

4. AUTOMATE SLOWLY

Washington could be publishing three to four times more data than it does today, Saunders said, but the state holds back because building publication into programs gradually, through automation, ensures the efforts will stick.

“Program managers know that they can and should publish, and when they do, they tend to link it to their own programmatic goals as opposed to a specific political commitment,” he said. “What I typically do is work with agencies to see if there’s a way I can encourage them to make publication part of their program design, and if I can’t, then I wait for another day.”

This approach is slower, but like proper diet and exercise, experts recommend it because it works.

Open data’s relevance will grow only if efforts mature. In Washington and elsewhere, data sets are often used for purposes different from what was originally intended. Opportunities to repurpose data will appear more frequently as the information becomes better organized, shared and understood. One severe obstacle to that prospect is that today there exist few standard schemas for publishing data. Roads, for instance, cross every boundary the nation has, and yet road data takes a new format in each jurisdiction. Today, without standards, a large project that uses open road data sounds like more trouble than it’s worth.
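
The fragmentation described above can be shown with a toy example: two jurisdictions publish the same road attributes under different field names and units (all field names here are invented), and any shared tool must normalize them before cross-boundary analysis is possible:

```python
# Hypothetical illustration of the schema problem described above.
# Two jurisdictions describe the same road differently; a shared
# normalizer maps both into one common shape.
CITY_A_RECORD = {"road_name": "Main St", "lanes": 4, "speed_mph": 35}
CITY_B_RECORD = {"name": "Main Street", "lane_count": 4, "speed_kph": 56}

def normalize_city_a(rec: dict) -> dict:
    return {"name": rec["road_name"], "lanes": rec["lanes"],
            "speed_mph": rec["speed_mph"]}

def normalize_city_b(rec: dict) -> dict:
    # Convert kilometers per hour to miles per hour for consistency.
    return {"name": rec["name"], "lanes": rec["lane_count"],
            "speed_mph": round(rec["speed_kph"] / 1.609)}

combined = [normalize_city_a(CITY_A_RECORD), normalize_city_b(CITY_B_RECORD)]
print(combined)  # both records now share one schema
```

Every additional jurisdiction multiplies this mapping work, which is why shared publishing standards matter.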

5. COLLABORATE ON THE CREATION OF PUBLISHING STANDARDS

Government has a hard time following publishing standards today because not many exist. The President’s Task Force on 21st Century Policing is developing standards for police data, Data.gov is working toward a standard that will let companies like Uber publish their ride data meaningfully, and programs like Bloomberg’s What Works Cities initiative are positioned to develop standards across city lines. Comprehensive, accessible publishing rules would reduce the work required to free data sets and would solve many of today’s data-sharing and comprehension snags.

6. TRUST YOUR EXPERTS

The public isn’t qualified to tell the government how it should be using its data, because the public doesn’t understand government. Most people think “the government” means the president or Congress. No one understands the challenges of government better than those who run it, and those are the people who should guide the use of public-sector data.

Utah is growing its open data automation daily under the guidance of experts. The technology office monitors which data sets its offices need and educates stakeholders on how to use that information. The state auditor, the health-care system and external data requesters are among those learning, said Dave Fletcher, Utah’s CTO.

“Increasingly we’re working on an initiative that we’re calling data-driven government to make better decisions based on data,” Fletcher said, adding that they share statewide data with counties so information like graduation rates, unemployment rates, taxes and air quality measures are easily accessed by commissioners.

Drew Mingl, Utah’s open data coordinator, said people are grateful to have a definitive centralized source of state information that can yield new insights. Data now being drawn from the state’s Medicare system, for example, showed a $25,000 deviation in the cost of hip replacement surgery in two neighboring counties.

“People are now making better, more informed decisions because we’ve put all this state data in one place where they can get access to it,” Mingl said.

Los Angeles runs one of the best open data portals in the nation. It ranks first on the U.S. City Open Data Census, with nearly 100 percent of the city’s data open to the public. It’s not perfect, but what it has, it gained through the knowledge of the city’s experienced workers.

Ted Ross, general manager of L.A.’s Information Technology Agency, said the city wanted three things from its portal: a way for average citizens to view data casually, capabilities for data scientists who wanted to do more with the data, like download it or use APIs, and the ability to integrate federated data sets from across systems. Contracting a vendor was the easiest way to reach those goals, Ross said, so rather than develop the portal in-house, that’s what Los Angeles did.

The city listens to the people who use data most to guide its efforts: journalists, researchers, officials and technology staff, Ross said. This feedback ensures the city’s doing more than fulfilling a political mandate, he said.

L.A. has done more with its data than leave it dangling. Vision Zero, a multinational road safety program, promotes roadway design to reduce pedestrian injury and death, and it’s powered by the city’s open data.

“We worked with USC, who volunteered about 25 graduate-level data science students and three professors, and we basically analyzed for causation and commonality, and trends relating to those, and they can help identify some of the high-value networks,” Ross said. “That’s a prime example of taking open data and … using it as a platform to interact with a local university and actually identify information and insight that’s being leveraged to save lives.”

Open data doesn’t need to save lives — and it usually won’t. Its value is in supporting the core functions of government, which are basic things like keeping parks and water clean and trash cans empty, said Josh Baron, applications delivery manager for Ann Arbor, Mich., and that should be the goal of everyone who works in government.

“Our No. 1 job,” Baron said, “is to support the lines of business who are out there making the city a wonderful place to live.”

Editor’s note: Ideas inspired from Colin Wood, “6 Ideas to Help Government Realize Open Data’s Transformative Power,” GovTech, Web, 21 Apr. 2016.

Want a Better GDP? Close the Gender-Wage Gap

On April 12, 2016, the United States observed Equal Pay Day, a day that symbolizes how far into the year women must work to earn what men earned in the previous year (www.pay-equity.org). Although progress has been made toward closing the gender pay gap in the decades since the Equal Pay Act (EPA) was passed in 1963, the United States still has work to do to achieve gender-wage equality. Not only is equal pay a step forward for women, but studies show it would also benefit the United States’ gross domestic product (GDP) and the economy as a whole.

In a recent report published by the McKinsey Global Institute (MGI), findings show that the greater the gender parity in the workplace, in terms of pay, hours worked, and access to full-time jobs, the greater the benefit to the country’s overall economy (www.govexec.com). The report strongly recommends that both government and businesses take a more proactive stance in effectuating gender equality. Currently, economists are concerned that as America’s population ages and retires, there will not be enough young workers to take their place, which would harm the economy: there would be fewer people to provide goods and services and to work and earn wages, along with lower levels of productivity. Each of these factors would likely culminate in slowing GDP growth (www.govexec.com).

In spite of economists’ worries, the GDP will not suffer if employers aim to bridge the pay gap by making more room for women and paying them the same wages as men in the workforce. At present, women work fewer hours, mostly in lower-paying sectors, and have a lower labor force participation rate than men. However, if employers increase women’s labor force participation and assist them with entering and staying in more lucrative and highly-productive jobs, it will be easier to maintain current levels of economic activity and production even as the aging population retires, which will ultimately prevent economic deceleration in the United States.

Although the infographic below was published in 2012, the information is still relevant to the issue of pay disparity in 2016:

The McKinsey report provides numerical estimates of what closing the current gender pay gap could be worth. According to the report, if by the year 2025 women are paid the same as men, work the same number of hours as men, and are represented equally in every sector, an additional $4.3 trillion could be added to the United States GDP. That would leave GDP 20 percent greater than in a business-as-usual scenario, which does not account for closing the gender pay gap. Since this is a high estimate, requiring women’s paid labor to precisely mirror men’s, McKinsey researchers also modeled a more plausible scenario in which each U.S. state matches the pace of the states currently making the greatest progress toward gender-wage equality. In that scenario, an additional $2.1 trillion could be added to the GDP by 2025.
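
The relationship between those figures can be sanity-checked with simple arithmetic; the business-as-usual GDP below is implied by the article’s numbers rather than stated in the report:

```python
# Back-of-the-envelope check on the two McKinsey scenarios cited above.
full_potential_gain = 4.3  # trillions USD added by 2025 under full parity
share_above_bau = 0.20     # gain equals 20% of business-as-usual 2025 GDP

implied_bau_gdp = full_potential_gain / share_above_bau  # trillions USD
print(f"Implied business-as-usual 2025 GDP: ${implied_bau_gdp:.1f} trillion")

best_in_state_gain = 2.1   # trillions USD if every state matches the leaders
print(f"Best-in-state scenario captures {best_in_state_gain / full_potential_gain:.0%} "
      "of the full-parity gain")
```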

While Equal Pay Day was established by the National Committee on Pay Equity (NCPE) in 1996 as a public awareness event, women, men, and the economy alike may prefer to turn it into a celebration of the past once gender-wage equality becomes a reality in the United States.

Natural Disaster Crises? Technology May Be the Answer

Whether it be a tornado, tsunami, earthquake, monsoon, hurricane, flood, or any other natural phenomenon, no one can be fully prepared for the aftermath of such disasters. Even with around-the-clock efforts from dedicated responders, disaster victims almost always outnumber the help available to them, forcing painful decisions about which victims take priority and which must hold out just a little longer. Luckily, One Concern, Inc.—a startup that earned a coveted spot on GovTech100, a list of the top 100 companies focused on government customers—aims to be one of the first to use artificial intelligence to save lives through analytical disaster assessment and calculated damage estimates.

The idea for One Concern came from CEO and co-founder Ahmad Wani, whose home region of Kashmir, India is especially prone to earthquakes and floods. In 2005, Kashmir was hit by an earthquake that took the lives of 70,000 people—one of two disasters that inspired Wani to pursue graduate studies in earthquake engineering research at Stanford University. The second came in 2014, when a large flood engulfed Kashmir while Wani was visiting his parents, leaving eighty percent of the region underwater in a few short minutes. According to Wani, people had to camp out on their rooftops for up to a week without food and clean water while waiting for uncertain rescue by ad hoc response teams.

The infographic below demonstrates the detrimental impact that various natural disasters have on communities in which they occur:

Although Wani is cognizant that his experiences occurred in a developing country, people in developing and developed countries alike face the same difficulty and chaos in the event of a natural disaster. Wani is trying to use his experiences to solve the problem of post-disaster reconnaissance and rescue through artificial intelligence, with the intent of saving lives and strengthening communities. Through its core product and web platform, “Seismic Concern,” the company can alert those located in jurisdictions affected by an earthquake by displaying a color-coded map of the likely structural damage, and can notify emergency operation centers so they can direct their limited resources to rescue and recovery. Seismic Concern supports not only response prioritization but also recovery operations such as material staging and shelter management by compiling an Initial Damage Estimate (IDE), which emergency operation centers need in order to request financial assistance from state and federal institutions.
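
One Concern’s actual models are proprietary, but the color-coded map idea can be sketched as a hypothetical mapping from an estimated probability of severe structural damage to a triage color; the thresholds and colors below are illustrative assumptions, not the company’s:

```python
# Hypothetical sketch of turning per-block damage estimates into a
# color-coded triage map, in the spirit of the platform described above.
def damage_color(p_severe_damage: float) -> str:
    """Map an estimated probability of severe structural damage to a color."""
    if not 0.0 <= p_severe_damage <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    if p_severe_damage >= 0.6:
        return "red"      # prioritize search and rescue
    if p_severe_damage >= 0.3:
        return "orange"   # likely damage, inspect soon
    if p_severe_damage >= 0.1:
        return "yellow"   # possible damage
    return "green"        # likely safe

# Invented per-block damage probabilities, as a model might emit them.
blocks = {"block_12": 0.72, "block_13": 0.35, "block_14": 0.05}
triage = {name: damage_color(p) for name, p in blocks.items()}
print(triage)  # block_12 flagged red for rescue priority
```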

Furthermore, One Concern is using state-of-the-art machine learning algorithms, stochastic modeling, training modules, as well as geophysical and seismological research to enable emergency operation centers to train based on actual earthquake simulations before an actual earthquake strikes. According to One Concern, this can aid in personnel readiness and planning development, thus making a community more proactive and resilient.

For now, One Concern is relatively unknown to the cities and countries that might adopt the revolutionary technology in which it specializes. Fortunately, Wani’s company is in the business of being ready to respond to anything at any time—an industry that spans the globe. By empowering rescuers and first responders with such valuable tools in times of crisis, One Concern can help them save lives.

Eye-phone: A technology that powers the blind

In the past, visually impaired people had to shell out thousands of dollars for technology that magnified their computer screens, spoke navigation directions, identified their money and recognized the color of their clothes. Today, users only need smartphones and a handful of apps and accessories to help them get through their physical and online worlds. New software is helping people with limited or no sight navigate around town and across the Internet.

Luis Perez carefully frames his photo to get the best shot for Instagram. Gripping his white cane in one hand and his iPhone in the other, Perez squints at the screen and points the display toward the sunset. His iPhone speaks: “One face. Small face. Face near top left edge.”

Perez snaps several photos and then puts his iPhone back in his pocket, with plans to examine the images later. Taking sunset pictures with an iPhone is nothing remarkable — until you consider that Perez, a 44-year-old who lives in St. Petersburg, Florida, is legally blind. Not being able to clearly see the photos he’s taking doesn’t slow him down. By using technology built into the iPhone, along with apps from the App Store, Perez has developed quite a photography habit.

“My time with vision is limited,” says Perez, who began losing his sight about 15 years ago from retinitis pigmentosa, a genetic eye disease. He now sees only a small circle of what’s directly in front of him, and that will deteriorate over the next few years. “I have to enjoy it as much as I can, and photography is part of that.”

VoiceOver, the iPhone’s built-in screen-reading technology, powers much of this capability.

VoiceOver first turns off the iPhone’s single-tap function on the display. After that, users can move their fingers across the screen to hear what’s on the display. That could be anything from the names of the apps themselves to words in an email, a text message or a social media post. When users turn on the “Speak Hints” function, VoiceOver will say what an app is and then give instructions for using it. Users can even adjust the voice’s speaking rate and pitch.

Lay of the land

By itself, VoiceOver makes it easier for people with limited sight to use their iPhones. But the technology really comes into its own when mobile apps hook into its features. BlindSquare, which talks to users as they walk along crowded city streets and inside busy shopping malls, is a great example.

In addition to VoiceOver, the mobile app taps into the iPhone’s built-in GPS, Foursquare (which knows local landmarks and surrounding areas) and a crowdsourced map of the world. That combination allows BlindSquare to speak the names of landmarks, such as cafes, shops and libraries, as the user walks by. Shaking the iPhone prompts BlindSquare to say the current address and nearest intersection. It will even, for example, tell the user that the entry to her destination has “four doors, two of which are automated, and there’s a second set of doors after the vestibule.”

“Twenty years ago, there’s no way we’d be able to walk on our own to find a restaurant,” says Kevin Satizabal, a blind musician and an online communities assistant for the Royal London Society for Blind People. “That’s the great thing about technology. It’s letting people blend in and do everyday tasks with a lot greater ease.”

 

“This is the best time in history to be blind.” – Luis Perez

Voice Dream reads out text from Web pages, PDFs, PowerPoint presentations and other files. The Be My Eyes app lets blind users video-chat with sighted volunteers for things like distinguishing between two cans of soup. KNFB Reader pulls text from photos taken with the iPhone.

But it’s not just purpose-built apps for the blind that tap into the iPhone’s assistive technology. Many people say some mainstream apps, such as Twitter and Periscope for social media and Uber and Lyft for ride-booking services, have well-designed accessibility, too.

“What I really get excited about are all these mainstream apps,” says Blanks. “That’s what really makes me feel part of society.” Blanks’ sentiment would likely have pleased Apple’s late co-founder, Steve Jobs, who famously said “it just works” when talking about his company’s products.

“We consider accessibility an integral part of what we build into our technology, not an add-on,” says Sarah Herrlinger, Apple’s senior manager for global accessibility policy and initiatives. “It’s a basic human right.”

Almost there

Apple’s device isn’t the only smartphone to have accessibility features. Google’s Android software also has text-to-speech and screen-reading features for phone makers to use. Microsoft, working with Guide Dogs UK, has developed a wearable system that creates a “3D soundscape” similar to BlindSquare.

But not all apps are created equal. Some lose their assistive benefits after being updated. Others add the features as an afterthought, instead of from the get-go.

Lisamaria Martinez, a blind woman who lives in Union City, California, likes a parenting app that explains her baby’s milestones. But the app presents the information in an image of text, not text on its own. That means VoiceOver doesn’t work. To get around it, Martinez takes a screenshot of the images, uses another app to pull the text out of the image and then translates the text into speech.

“It’s super annoying,” says Martinez, who works with Blanks at LightHouse. “The problem is people don’t think about accessibility from the design stage.” That’s what LightHouse and other advocacy groups want to change. “With the right support, we can do a lot of things that people didn’t think we could do,” says Perez, the avid photographer who also teaches people to use technology.

 


Shara Tibken. “Seeing Eye Phone: Giving Independence to the Blind.” CNET. N.p., Web. 25 Mar. 2016.

How Telework May Be Bad for Business

Living in the information age gives employees 24-hour access to work-related material and information, which is convenient when working outside the office, after office hours, or from home entirely. Though this convenience helps business staff stay available and responsive at any time, telework may be bad for business. Indeed, the National Institute of Standards and Technology (NIST) suggests that employees who access work content on their personal computers, smartphones, and tablets may make companies more vulnerable to hackers and network breaches, because attackers find it easier to steal confidential information by first compromising devices used for telework than by attacking technologies inside the organization.

Many organizations find that their employees, contractors, business partners, vendors, and other users prefer to work from home for a variety of reasons. With this in mind, NIST is drafting new security recommendations for both businesses and employees, including a suggestion to create separate, external networks for personal devices. NIST also notes that organizations that already require employees and third parties to secure their client devices generally fail to account for the possibility that unsecured, malware-infected, or otherwise compromised devices may already be connected to confidential company material (NIST).

A March 2016 draft publication by NIST makes the pragmatic suggestion that organizations “plan their remote access security on the assumption that the networks between the telework client device and the organization cannot be trusted” (United States Department of Commerce). Businesses and organizations are urged to heed this advice: a secure network is just as imperative as employee productivity when staff engage in telework.

The infographic below highlights the growing trend and benefits of teleworking among various companies around the globe. While employees who telework may be more productive and even prefer the arrangement, NIST evidence suggests that companies have thus far been reluctant to ensure critical network security as a necessary precaution for those who perform telework.

While the agency is collecting public comment on its drafts until April 15, 2016, it currently recommends that employees practice network safety by creating unique access codes and passwords for personal devices, setting automatic locks for idle devices, and disabling Bluetooth and Near Field Communication features except when necessary, in order to protect their organization’s network security and overall bottom line (www.nextgov.com).

Ear authentication: a new security recipe?

We’ve had our fingers, voices and irises scanned, but there’s now a new biometric in vogue: ears.

NEC, the inventor of this new personal identification technology, says the system has an accuracy rating of 99%.

It measures the unique effect your ears have on sound. By identifying how sound resonance is altered by the unique shape of each person’s ears, security systems can now distinguish accurately among millions of individuals.

In case you’re wondering about the effect of modifications to your ear shape, don’t worry. Oversized earrings and studs, and the severe boxing-ring pummelings you’ve endured, won’t affect the accuracy of the system, because it measures how sound is shaped by the cavities inside the human ear rather than by the outer ear’s appearance.

The new system’s advantage is that it is more natural: it does not require particular actions, such as scanning a part of the body over an authentication device, which makes it easier to conduct continuous authentication, according to a statement from Shigeki Yamagata, general manager of the Information and Media Processing Laboratories at NEC Corporation.

The system works everywhere, even when the user is moving and working.

For those not already sold on the idea, here are the technical details of how it works. For a few hundred milliseconds, an earphone with a built-in microphone emits acoustic signals from its speaker.

The microphone then picks up those signals as they are transmitted back out of the ear. During this round trip, the soundwaves are changed, and the changes vary from ear to ear. Measuring them gives every person a unique digital signature.

The change measurement is made using a synchronous addition method, which adds together the multiple received signals and takes the average of their waveforms, eliminating noise from the measurement. The system then calculates how the sound resonates within the ear, i.e. the acoustics of each ear.
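The synchronous addition step described above can be sketched in a few lines of Python. This is an illustrative toy only: the function name and sample data are invented for the example, and NEC’s actual implementation is not public. The idea is simply that averaging several time-aligned recordings cancels random noise while preserving the ear’s consistent acoustic response.

```python
def synchronous_average(waveforms):
    """Average several time-aligned waveforms sample by sample.

    Random noise differs between recordings and tends toward zero in
    the average; the consistent ear response survives.
    """
    if not waveforms:
        raise ValueError("need at least one waveform")
    n = len(waveforms)
    length = len(waveforms[0])
    if any(len(w) != length for w in waveforms):
        raise ValueError("waveforms must all be the same length")
    return [sum(w[i] for w in waveforms) / n for i in range(length)]

# Example: three noisy recordings of the same underlying response.
received = [
    [1.0, 2.1, 0.9],
    [1.2, 1.9, 1.1],
    [0.8, 2.0, 1.0],
]
averaged = synchronous_average(received)  # noise partially cancels
```

In practice the averaged waveform, not any single recording, would be compared against the stored signature, which is why the method tolerates a noisy environment.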

All this happens within a second.

NEC tests have shown that there are two main sets of sound data that can be used for recognition. Firstly, there are the signal components that travel through the external ear canal and are reflected by the tympanic membrane. Secondly, there are signal components that pass through the tympanic membrane and are reflected within the inner parts of the ear.

NEC plans to commercialise the technology around 2018.

A wide range of applications is planned, including fraud and identity theft prevention. It will help to secure critical infrastructure and take the risk out of wireless communications and telephone calls, NEC says.

Editor’s Note: Ideas inspired by:


Nick Booth. “Forget Fingerprints, Ears Are So Next Season in Biometrics.” Naked Security by Sophos. N.p., Web. 10 Mar. 2016.