
Friday, July 29, 2011

#ALGORITHMS: "Gig.U to Extend LAN Speed Among Schools"


Universities nationwide are banding together to extend the gigabit per second speeds typical of modern local-area networks to long-distance connections among signatory educational institutions.

Many modern PCs offer gigabit per second local-area network speeds to those willing to upgrade their switches and cables from the 10/100 megabit per second gear most people get by with on their LANs. Gig.U, on the other hand, plans to promote gigabit per second speeds over long-distance connections between signatory universities.

Gigabit per second connections are 10 to 100 times faster than the typical 100- and 10-megabit per second local-area networks, respectively. But the difference amounts to only seconds for the small files most home users transfer. For real-time gaming on a LAN, almost no time-lag is perceptible; extending even 100M bps LAN speeds to a broadband connection among remote gamers, however, can cost $100 to $200 per month from Internet Service Providers.

Universities, on the other hand, routinely transfer large files for scientific research, making gigabit per second connections a real time-saver. And as with network gaming, simulations and visualizations of scientific data can be run remotely when a user's terminal has a gigabit per second connection.
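A quick back-of-envelope calculation shows why the extra speed matters for big science but barely registers for casual use (a sketch; the file sizes below are illustrative, not figures from Gig.U):

    # Back-of-envelope transfer times at common link speeds; the file
    # sizes are illustrative examples, not figures from Gig.U.

    def transfer_seconds(size_bytes, link_bps):
        """Ideal transfer time, ignoring protocol overhead and congestion."""
        return size_bytes * 8 / link_bps

    FILES = {"5 MB photo": 5e6, "2 TB research dataset": 2e12}
    LINKS = {"100M bps": 100e6, "1G bps": 1e9}

    for fname, size in FILES.items():
        times = ", ".join(f"{lname}: {transfer_seconds(size, bps):,.1f} s"
                          for lname, bps in LINKS.items())
        print(f"{fname} -> {times}")
    # The photo saves a fraction of a second; the 2 TB dataset drops from
    # roughly 44 hours at 100M bps to about 4.4 hours at 1G bps.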

Universities nationwide that have signed onto Gig.U are soliciting ideas for how to extend gigabit per second local-area network speeds over long-distance connections.
Many technical universities have already extended such high-speed LAN connections among themselves, such as MIT's 10G bps Regional Optical Network on the eastern seaboard. Stanford University has a Google-sponsored pilot program that extends gigabit per second LAN speeds to nearby residents, and Case Western--a Gig.U signatory--has a similar pilot program in Cleveland.

The other 28 universities backing Gig.U, however, want to level the playing field by providing such high-speed connections among even remotely located universities, from Alaska to Hawaii. The 29 universities include: Arizona State, Case Western, Colorado State, Duke, George Mason, Howard, Indiana, Michigan State, North Carolina State, Penn State, Virginia Tech, Wake Forest, West Virginia and the Universities of Alaska, Chicago, Florida, Hawaii, Illinois, Kentucky, Louisville, Maryland, Michigan, Missouri, Montana, New Mexico, North Carolina, South Florida, Virginia and Washington.

The Gig.U effort will likely take several years to develop, since the current stage of the program merely has universities circulating a request for information (RFI) asking interested parties how to provide gigabit per second connections to remote universities economically. The idea is to solicit partnerships with industry and other remote users who also need high-speed connections, such as health-care institutions, in order to come up with creative methods of financing long-distance high-speed networks. If workable solutions are forthcoming, the next step will be requests for proposals (RFPs) to implement the solutions suggested in response to the RFI.

Initiated by The Aspen Institute's Communications and Society Program, the effort will be directed by Blair Levin, former Executive Director of the National Broadband Plan and a Fellow at the Aspen Institute.

Further Reading

Thursday, July 28, 2011

#3D: "Smarter 3D Employs Eye Tracking"


LG claims to have solved the problems of glasses-free 3D displays using smarter eye-tracking technology that adapts to the viewing angle of the user. Available this fall in a 20-inch monitor, the smarter 3D solution eliminates the need for glasses and is compatible with normal 2D imagery.

Glasses-free 3D displays, called auto-stereoscopic displays, have in the past depended on lenticular lenses that fit over displays to divert separate images to the right and left eyes. Unfortunately, lenticular lenses scramble regular 2D images. Parallax barriers are an alternative to lenticular lenses, but they do not provide a wide variety of viewing angles. LG has solved this last remaining problem with smarter eye-tracking technology that adapts the parallax barrier as the viewer's head moves.

Parallax barriers work similarly to lenticular lenses in that they divide an image into alternating stripes for the left and right eyes. However, instead of using a lens to bend the alternating stripes toward the correct eye, a parallax barrier uses a second layer of alternating clear and black lines to block the left-eye stripes from reaching the right eye, and vice versa. The brain fills in the gaps between stripes, resulting in completely separate images reaching the correct eyes without the need for lenticular lenses or 3D glasses.

Unfortunately, the tiny parallax-barrier stripes must be precisely located in order to correctly block the right-eye stripes from the left eye and vice versa. Thus in the past, parallax-barrier displays have only been successful on smaller screens like cell phones, where users keep their heads centered on the screen anyway. Larger displays did not work well with parallax barriers, because users are much more likely to move their heads out of the correct central location.
LG's solution was to add an eye-tracking camera that adjusts the parallax barrier so that it reliably diverts the left- and right-eye stripes to the correct eye even for users moving their heads.
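Simple geometry shows why even small head movements defeat a fixed barrier. The sketch below is a rough similar-triangles illustration, not LG's actual algorithm, and the gap and viewing-distance values are assumptions:

    # Rough geometric sketch of why eye tracking helps a parallax barrier.
    # Not LG's actual algorithm; the numbers are illustrative assumptions.

    def barrier_shift_mm(head_shift_mm, gap_mm, viewing_distance_mm):
        """By similar triangles, a lateral head movement of x at distance D
        requires the barrier (at gap g from the pixel plane) to shift by
        roughly x * g / D to keep each stripe aimed at the correct eye."""
        return head_shift_mm * gap_mm / viewing_distance_mm

    # Assumed values: 1 mm pixel-to-barrier gap, 600 mm viewing distance.
    for head_shift in (10, 50, 100):  # mm of lateral head movement
        shift = barrier_shift_mm(head_shift, gap_mm=1.0, viewing_distance_mm=600.0)
        print(f"head moves {head_shift} mm -> barrier must shift {shift*1000:.0f} microns")
    # Even a 10 mm head movement demands a ~17 micron barrier shift--far
    # finer than a pixel--which is why a fixed barrier only works dead center.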

LG's smarter glasses-free 3D technology will debut in its new 20-inch D-2000 display. The only major limitation is that it can track the eyes of just a single user. Thus the technology is not appropriate for TVs, where multiple viewers may be watching from different angles. For single users, however, LG's smarter eye-tracking technology solves all the problems associated with auto-stereoscopic 3D displays and is suitable for still images, movies and gaming. The display also has a built-in conversion capability that adds some degree of 3D to 2D movies and games (but not still images).
Further Reading

Wednesday, July 27, 2011

#ALGORITHMS: "Seven Reasons HTML5 Is Killing Flash"


Seven inherent features in HTML5 are eliminating the need for Adobe Flash as well as enabling Websites to add capabilities that rival native applications. The fifth generation of the hypertext markup language--HTML5--is adding capabilities once reserved for plug-ins and native applications. Test your browser's HTML5 compatibility by visiting The HTML5 Test site.

Today, more than 109 million mobile users have HTML5-ready browsers, and by 2016 more than 2.1 billion mobile users will, according to ABI Research. There remain significant details to work out, which could delay the final specification until as late as 2020. Long before then, however, HTML5 features will be available that make Flash virtually obsolete, as well as enable ordinary Website developers to create universal applications that run on any platform.

According to ABI Research senior analyst Mark Beccue, there will be 25 key HTML5 features available to most users over the next three to five years. Smarter Technology recently asked Beccue to pick the seven top features out of those 25 that will be most important to users. Here is his list:

#1-Video Play: The No. 1 reason to use Flash today is to play video, but HTML5 includes a video tag that plays clips with all the start, stop, pause and other features people already expect from YouTube and other Flash-based sites.

#2-Video Record: This feature is not common today, but will become increasingly important, according to Beccue, because virtually all new mobile devices with Web access also have video cameras.

#3-Audio Play/Record: Today, you need Flash, QuickTime or Java to play or record audio, but with HTML5 it is just another tag.

#4-Apps: HTML5 allows Web pages to directly access the same routines that make browsers work, enabling them to run freestanding just like an ordinary application. (Try this feature out by downloading the Financial Times app.)

#5-Rich 2D Graphics: All types of sophisticated two-dimensional graphics will be built into HTML5, enabling faster operation and nearly instant loading compared with the slow loading times of sites using rich graphics today.

#6-IM: Instant messaging will be built into HTML5 by virtue of WebSockets--which, once opened between two users, allow instantaneous communications. (Try this out by running a WebSocket demo in two windows; a minimal client sketch appears after this list.)

#7-Real-Time Streams: WebSockets will also allow any Web-page designer to easily add real-time data streams--from stock market prices to surf reports--which today require complicated programming outside the capabilities of most HTML coders.
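Items #6 and #7 hinge on the same primitive: a persistent two-way socket between client and server. Browsers expose this as the JavaScript WebSocket API; the sketch below shows the equivalent round trip from Python, assuming the third-party websockets package (pip install websockets) and a placeholder echo-server URL:

    import asyncio
    import websockets  # third-party package: pip install websockets

    async def main():
        # Placeholder URL--substitute any WebSocket echo service you trust.
        uri = "wss://echo.example.com"
        async with websockets.connect(uri) as ws:
            await ws.send("hello over a persistent socket")
            # With the socket held open, replies arrive as soon as the
            # server pushes them--no polling, no page reload.
            reply = await ws.recv()
            print("received:", reply)

    asyncio.run(main())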
Further Reading

Tuesday, July 26, 2011

#ENERGY: "U.S. Renewables Outpace Nuclear Power"


The U.S. Energy Information Administration reports that renewable energy sources in total have surpassed nuclear power, and are likely to widen the gap unless new nuclear plants are built.

U.S. production of energy from renewable sources recently passed that from nuclear reactors, despite administration efforts to revitalize U.S. nuclear power generation with federal loan guarantees for constructing new nuclear reactors.

In his 2011 State of the Union speech, President Obama said, "We need more production, more efficiency, more incentives. That means building a new generation of safe, clean nuclear power plants in this country."
Obama's attempt to paint nuclear energy green, however, came before the nuclear disaster in Japan, which has prompted nations worldwide to back away from nukes, including Germany, which has pledged to concentrate on renewable energy and shut down all its nuclear power plants by 2022.

The Obama Administration is nevertheless proposing to add $36 billion to the current $18.5 billion in loan guarantees for new nuclear power plant construction in its FY2011 budget, bringing the total to $54 billion--nearly tripling the money currently available for new nuclear reactors.

Renewable energy passed that of nuclear power in March 2011 (in quadrillion BTUs). (Source: Energy Information Administration)
Meanwhile, the world continues to reel from the triple meltdown at the Fukushima Daiichi nuclear complex in Japan. Efforts there have been plagued by problems and missteps.

In the wake of this continuing fiasco, one bright light shines--namely, that renewable energy sources have already passed nuclear power generation in the U.S. and are on track to outpace oil too.

The U.S. Energy Information Administration reported that renewable energy sources--which include hydroelectric, solar, wind, geothermal, and bio-mass/fuels--were responsible for 0.805 quadrillion BTUs of energy, or about 17 percent of the total U.S. energy generation, in March 2011. Nuclear, on the other hand, provided 0.687 quadrillion BTUs, or about 14.5 percent, according to EIA estimates.

Comparing the first quarter of 2011 with the same quarter of 2010, renewable energy production rose about 15 percent, according to the EIA, and compared with the first quarter of 2009 it rose more than 25 percent, marking accelerated growth in 2011.
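The March percentages can be sanity-checked directly from the BTU figures quoted above (a quick consistency check using only this article's numbers):

    # Consistency check on the EIA figures quoted above.
    renewables_qbtu = 0.805   # March 2011, quadrillion BTUs
    nuclear_qbtu = 0.687
    renewables_share = 0.17   # "about 17 percent"

    implied_total = renewables_qbtu / renewables_share   # ~4.74 quadrillion BTUs
    nuclear_share = nuclear_qbtu / implied_total
    print(f"implied total U.S. generation: {implied_total:.2f} quadrillion BTUs")
    print(f"implied nuclear share: {nuclear_share:.1%}")  # ~14.5%, matching the EIA estimate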
Further Reading

#ALGORITHMS: "Client-Based Virtualization Gains Momentum"



Server-hosted virtualization is the poster-child of cloud computing where thin clients share a single IT-managed image. However, client-hosted virtualization is gaining momentum by claiming to offer IT cost savings, simpler centralized management, plus an enriched user experience comparable to stand-alone PCs.

In the beginning, IT had to spend weeks setting up, provisioning, upgrading and maintaining fleets of PCs on each user's desk. Server-hosted virtualization was a major step forward, cutting those times down to days or in some cases just hours; the thin clients were also cheaper than putting a PC on every user's desk. Today, however, the cost of PCs has dropped to nearly match that of thin clients, and client-hosted virtualization cuts IT set-up, provisioning, upgrading and maintenance times down to minutes instead of hours or days.

Client-hosted virtualization for 1,000 users requires only a single server (compared with 20 for server-hosted virtualization), and it adds laptop support plus quicker recovery, provisioning, patching, setup and upgrades.

Client-hosted VDI (virtual desktop infrastructure) executes all application code on a desktop PC, which to the user appears to be a normal local operating system with all the advantages of high-speed execution for bandwidth-intensive applications like HD (high-definition) video, plus it works with all their local USB peripherals. In the background, however, their OS is really a copy of a centralized image managed by IT for up to 1,000 users from a single server, which can be located on-premises or in the cloud.

Server-hosted VDI, on the other hand, executes code on a server that can handle only about 50 users and must not only maintain the image but also house each user's data, requiring 20 to 25 times more IT hardware resources and up to 20 times the IT personnel of client-hosted VDI. In addition, local USB peripherals are difficult to install, and high-bandwidth media like HD video is nearly impossible to support for large numbers of users.
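Those per-server figures make the hardware gap easy to quantify. Below is a minimal sizing sketch using only the ratios quoted above; real deployments would also weigh workload, storage and redundancy:

    # Server-count sketch using the ratios quoted in this article
    # (1,000 users per server for client-hosted VDI vs. ~50 for
    # server-hosted). A simplification, not a capacity-planning tool.
    import math

    def servers_needed(users, users_per_server):
        return math.ceil(users / users_per_server)

    for users in (500, 1_000, 5_000):
        client_hosted = servers_needed(users, 1_000)
        server_hosted = servers_needed(users, 50)
        print(f"{users:>5} users: client-hosted {client_hosted:>2} server(s), "
              f"server-hosted {server_hosted:>3} servers")
    # At 1,000 users: 1 server vs. 20 -- the 20x hardware gap the article cites.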

Intel, for one, is championing client-hosted virtualization as the wave of the future. Intel calls this intelligent desktop virtualization and is promising a new breed of multi-core processor to directly support client-hosted VDI. However, IT does not have to wait for these specialized processors to get in on this emerging trend, because many inexpensive desktop PCs today are already configurable for client-hosted virtualization from companies like Virtual Computer, RES Software, Wanova, MokaFive and Scense.

For example, Virtual Computer has been delivering its NxTop desktop-virtualization platform as an alternative to server-hosted VDI for 18 months. Recently, sales have skyrocketed fivefold, prompting Virtual Computer to announce this month a new Global Partner Program to keep up with demand.

"Client-hosting greatly reduces the cost of virtualization; plus, it eliminates the complexity and many of the limitations inherent in server-hosted VDI," said John Glendenning, senior vice president of worldwide sales and business development for Virtual Computer. "And with the new Global Partner Program, we have created a channel for IT that should greatly simplify implementing client-hosted virtualization for businesses of nearly any size."
Further Reading

Monday, July 25, 2011

#ROBOTICS: "Smarter Robots to Inspect Aging Nukes"


With the Nuclear Regulatory Commission (NRC) allowing valves and pipes to leak at up to 20 times their original limits, according to a recent Associated Press report, smarter robotic inspectors are being proposed to detect underground leaks before they release radiation into groundwater.

More than three-quarters of aging U.S. nuclear reactors have leaked radioactive tritium from underground pipes at one time or another, according to the Associated Press, which recently concluded a year-long study. In response, Massachusetts Institute of Technology has designed a tennis-ball shaped robot to inspect underground pipes for cracks before they can contaminate groundwater.

Since the earliest nuclear reactors were built in the 1960s, more than 400 accidental radioactive leaks have been reported, according to the AP study, which maintains that over and over again, the safety standards set by the Nuclear Regulatory Commission (NRC) have been relaxed in order to allow reactors to continue operating, rather than making the costly repairs to bring reactors back into compliance. As a result, the danger of radioactive contamination of groundwater is increasing, according to MIT.
"We have 104 reactors in this country [and] 52 of them are 30 years or older," said MIT professor Harry Asada, director of MIT’s d’Arbeloff Laboratory for Information Systems and Technology. "We need immediate solutions to assure the safe operations of these reactors."

A spherical robot equipped with a camera is being proposed to navigate underground pipes at nuclear reactors to locate potential cracks that could leak radioactive water. (Source: MIT)

Groundwater contamination has already occurred at many nuclear reactor sites, where underground pipes carry water to cool reactor vessels, leaching radioactive tritium into the soil around them. Unfortunately, the U.S. Government Accountability Office reports that the industry has only very limited methods to monitor these underground pipes for leaks. Instead of direct inspections, today nuclear engineers depend on ultrasonic waves to reveal cracks, and induced voltage gradients to measure corrosion on pipe coatings. If any anomalies are detected, the only way to inspect the pipes now is to dig them up.

The new spherical robot designed by MIT aims to provide a way to directly inspect these underground pipes without digging, namely by navigating them from the inside to check for cracks and signs of corrosion. The tennis-ball shaped robot, designed by Asada and colleagues at MIT's d’Arbeloff Laboratory, can withstand the extreme radioactivity within the pipes and carries a video camera and transmitter to send back images that engineers can inspect to assess the pipes' condition.

To ensure that the robot can navigate the pipes without getting stuck inside, the design avoids external appendages and propellers. Instead, the smarter propulsion method harnesses the pressure of the water flowing through the reactor, re-channeling it to provide thrust. Tiny channels and valves within the skin of the robot switch the direction of liquid flow internally so that water is emitted as a jet on the side opposite the desired direction of motion.
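The control idea reduces to action and reaction. Here is a deliberately tiny toy sketch of that valve-selection logic--an illustration of the concept only, not MIT's actual controller:

    # Toy sketch of the jet-propulsion idea: to move the sphere in a given
    # direction, open the valve that emits the water jet on the OPPOSITE
    # side. Illustrative only; not MIT's actual control design.

    OPPOSITE = {"forward": "aft", "aft": "forward", "left": "right", "right": "left"}

    def valve_to_open(desired_direction):
        """Thrust is the reaction to the jet, so the jet must exit
        opposite the direction we want the robot to travel."""
        return OPPOSITE[desired_direction]

    for heading in ("forward", "left"):
        print(f"to move {heading}: open the {valve_to_open(heading)} jet valve")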

Though the robot is still in the prototype stage, the researchers are also working on a camera pan-and-tilt mechanism powered by internal water flows. The camera itself will be fixed in position, but an internal two-axis gimbal will rotate the sphere in place to any desired direction.
Further Reading

Friday, July 22, 2011

#ALGORITHMS: "Five Reasons IaaS Will Top $4 Billion by 2015"


Infrastructure as a service is one of the fastest growing paradigms in the portfolio of cloud-based services. Here are the five reasons why IaaS is set to grow to become a $4 billion market by 2015.

Just as software as a service shifted software from a licensed enterprise asset to a provided service, and platform as a service shifted software development and hosting for specific computer architectures from assets purchased en masse to services provided on demand, infrastructure as a service allows enterprise hardware to be virtualized--including processors, storage, firewalls and other network resources.

Besides the savings in capital expenditures associated with building large data centers, IaaS reduces the labor costs of maintaining 24/7 network administrators and greatly reduces an enterprise's energy budget. IaaS is delivered by platform virtualization, where servers, software, data-center space and network equipment are outsourced to cloud service providers.

According to In-Stat, IaaS is destined to grow quickly over the next four years, culminating in a $4 billion market by 2015, with the top five markets being hospitality, food, health care, social services and retail.

Here are the five reasons that IaaS is destined to skyrocket:
Cloud-bursting: Without IaaS, enterprises must invest in servers that run at 10 percent of capacity 90 percent of the time, just so they can handle the bursts in activity that occur only 10 percent of the time. By off-loading these bursts to cloud-based services, an enterprise can reap substantial savings in capital expenditures (a back-of-envelope sketch appears at the end of this section).

Virtualization: Virtualization runs low-level code beneath operating systems, handling failover, redundancy, monitoring, clustering and other infrastructure-management tasks that would otherwise require separate appliances such as load balancers.

Hypervisor Convenience: IaaS combines hardware and software resources by virtue of low-level code, called a hypervisor, that runs independently of any chosen operating system. The hypervisor's convenience lies in its taking inventory of hardware resources and allocating them based on demand.

Resource Pooling: IaaS software works by running virtualization code--the hypervisor--to allocate hardware resources on-demand, thereby enabling resource pooling among departments of an enterprise.

Multi-tenant Computing: Because of the resource pooling made possible by virtualization, organizations with similar security and compliance requirements can share resources, an arrangement called multi-tenant computing.

These five areas will be the drivers that bring IaaS to new levels in the next few years.
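As a concrete illustration of the first driver, here is the back-of-envelope cloud-bursting comparison referenced above. Every cost figure is an assumption chosen for illustration, not an In-Stat number:

    # Cloud-bursting economics sketch using the article's
    # 10%-of-capacity-90%-of-the-time figure. Costs are assumptions.

    baseline_servers = 100        # fleet sized for peak load
    peak_fraction = 0.10          # share of the year spent near peak
    off_peak_utilization = 0.10   # capacity actually used the other 90%

    # Own capacity for the steady state; burst the rest to the cloud.
    steady_servers = int(baseline_servers * off_peak_utilization)      # 10
    burst_server_hours = (baseline_servers - steady_servers) * peak_fraction * 8760

    cost_per_server_year = 3_000        # assumed owned-server cost, $/yr
    cloud_rate = 0.50                   # assumed IaaS rate, $/server-hr

    owned = baseline_servers * cost_per_server_year
    hybrid = steady_servers * cost_per_server_year + burst_server_hours * cloud_rate
    print(f"all-owned fleet: ${owned:,.0f}/yr vs. burst-to-cloud: ${hybrid:,.0f}/yr")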
Further Reading

Thursday, July 21, 2011

#BUSINESS: "Women Leaders Take Heart: Times Are Changing"


More androgynous and transformational views of leadership are opening opportunities for women in management, despite the persistent cultural perception that masculinity is required for strong leadership.

Although strong leadership continues to be perceived as an overtly masculine character trait, progress is being made toward a more androgynous view, especially for middle managers. In addition, a transformational leadership style, which offers mentoring and gender-neutral rewards as incentives, may enable women to sidestep stereotypical prejudice, according to a new study by researchers at Northwestern University.

This new study's results counter past perceptions. In fact, more than 68 studies on how women are perceived in leadership roles found that overall they are still perceived as poor business leaders due to cultural stereotypes. Women are especially disadvantaged by a double-edged sword: they are assumed to be poor leaders unless they prove otherwise by their deeds, yet when they do demonstrate strong leadership they are perceived as "masculine" and presumptuous.

Fortunately, times are changing, especially when compared with the 1960s, when Betty Friedan published her ground-breaking book "The Feminine Mystique," one of the first public expressions of how women are discriminated against for leadership roles in the workplace.

Even today, however, women are often assumed to be poor leaders unless they prove themselves with acts of strong leadership. And even though many will perceive them as masculine and presumptuous as a result, adopting a transformational leadership style--including mentoring and gender-neutral rewards as incentives--can often mitigate prejudice and result in managerial advancement.

Middle management, in particular, has become more tolerant of androgynous leadership roles, which women often fit well. Qualities such as assertiveness and competitiveness are still associated with both strong leadership and masculinity, but women who express them are not automatically disadvantaged as was often the case in prior years.

The Northwestern University study found a change in perception. "Cultural stereotypes can make it seem that women do not have what it takes for important leadership roles, thereby adding to the barriers that women encounter in attaining roles that yield substantial power and authority," said Northwestern professor Alice Eagly. Nevertheless, a distinct shift toward more androgynous leadership roles is spreading in different parts of the world and within subcultures here in the United States. For instance, educational institutions often view leadership less as a masculine quality.
In the study, the researchers conducted a new meta-analysis using three separate paradigms to provide independent tests of the validity of perceived leadership stereotypes. These paradigms are think-manager/think-male, agency-communion, and the masculinity-femininity spectrum.

Eagly, a faculty fellow in the Institute for Policy Research at Northwestern, performed the meta-analysis with professors Anne Koenig at the University of San Diego, Abigail Mitchell at Nebraska Wesleyan University and Tiina Ristikari at the University of Tampere (Finland).

Further Reading

Wednesday, July 20, 2011

#ALGORITHMS: "Visualization Analytics Writ Large"


An 80-foot-wide visualization display simultaneously tracks data streams from thousands of sensors to provide the real-time analytics necessary to balance a statewide electric grid, with telecommunications the next challenge.

In what it terms a breakthrough combination of stream computing and analytics, Space-Time Insight today announced the installation of a record-breaking 80-foot wide wall-of-visualizations designed to give operators of one of the world's largest electric grids a real-time readout that pinpoints "hotspots" using what it calls "geospatial memory."

"Our installation provides situation intelligence to California ISO," said vice president of Space-Time Insight, Steve Ehrlich. "We provide the monitoring capabilities that allow grid operators to identify, analyze, and act in real-time."

California ISO uses a new 80-foot-wide visualization display driven by Space-Time Insight's analytics.

California ISO manages 80 percent of the state's grid for its local utilities over 25,000 miles of power lines that provide over 286 billion kilowatt-hours of electricity annually. The complexity and sheer magnitude of the data streams provided to operators had previously overwhelmed conventional computing techniques, according to Ehrlich, leading to a damage-control mentality. However, by visualizing the vast amounts of streaming sensor data in real-time, the 80-by-6.5-foot wall of geospatial memory allows operators to pinpoint hotspots before they become problems--diverting resources to head off potential catastrophes before they materialize.

"Operators were already overwhelmed by their traditional data streams and viewed the additional resolution provided by smart meters as a problem, but with our geospatial technology for analyzing real-time data we have been able to provide quick actionable information to operators," said Ehrlich.

Operators already had multiple large-screen monitors on their desks, but can now divert the output from any of their displays to the 80-foot-wide wall-of-visualizations for other operators to see. The 80-foot display also constantly tracks the output from all the different generation sources--from conventional hydroelectric to solar and wind farms--displaying their varying real-time outputs as currents swirl, clouds pass over and breezes blow, respectively.

Applications running simultaneously include Market Intelligence, Grid Intelligence, Renewable Integration and Crisis Intelligence, which in real-time detect and predict economic efficiencies, maximize grid reliability, optimize power-source utilization, and predict the course of environmental anomalies such as brush fires. The environmental visualizations, for instance, integrate inputs from Cal Fire (California Department of Forestry and Fire Protection) showing areas already burned with infrared sensors that show the progress of fires still burning, and with wind-speed data to predict the trajectory of fires and determine which transmission lines are at risk.

As a part of California ISO's Market Redesign and Technology Upgrade initiative, Space-Time Insight's geospatial algorithms provide both real-time and day-ahead predictions aimed at optimizing grid performance.
Space-Time Insight provides its geospatial visualizations by virtue of partnerships with Accenture, Cisco, IBM, Oracle and SAP.

The company is already working with the oil and gas industry to add real-time visualization and predictive analytics to their operations, but its next frontier is the telecommunications industry.

"Telecommunications is remarkably similar to power distribution," said Ehrlich. "Instead of households, the users are cell-phones subscribers and instead of power lines they have cell-phone towers and backhaul networks, but otherwise all our apps should translate over."
Further Reading

Tuesday, July 19, 2011

#SECURITY: "Did WWIII Already Start?"


U.S. Cyber Command will coordinate U.S. security efforts within each branch of the military, including the Army, Navy, Air Force, Marines and Coast Guard.

Last week the Pentagon detailed the most serious cyber-attack on U.S. national security to date. Was WWIII just declared?

Intruders crossed the line in March by stealing over 24,000 classified design documents from a government contractor, according to Pentagon disclosures last week. This prompted the U.S. Cyber Command to go on the offensive.

Cyber-space began as a way for citizens to "connect, socialize and organize themselves," according to the "Department of Defense Strategy for Operating in Cyberspace." Now, however, over 2 billion global users share cyber-space with over 15,000 U.S. Department of Defense networks and 7 million computing devices at hundreds of installations in dozens of countries worldwide, resulting in millions of daily probes and the theft of thousands of classified documents yearly.

In March, one of these intruders went over a red line, spurring the DOD to announce to the world that it will henceforth retaliate with active systems that detect intruders and relentlessly track them down in cyber-space.
"Foreign intelligence organizations have already acquired the capacity to disrupt elements of DOD’s information infrastructure," according to the DOD "Strategy for Operating in Cyberspace." Henceforth, DOD will "organize, train and equip for cyber-space as we do in air, land, maritime and space to support national security interests [in which] a cornerstone of this activity will be the inclusion of cyber red teams throughout war games and exercises [to develop an] active cyber-defense capability to prevent intrusions onto DOD networks and systems."

While WWIII was not officially declared, the DOD has put foreign governments and civilians on notice, warning DOD insiders in particular that henceforth they will suffer the "imposition of higher costs for malicious activity." The DOD "Strategy for Operating in Cyberspace" openly endorsed integrated offensive operations meant to disrupt the planning and execution of planned attacks, including the use of honeypot code to circumvent anonymity in order to track down attackers and stop their activities. During war-game scenarios cyber-attackers whose physical location has been identified can thus be dealt with by conventional forces.

"Active cyber-defense is DOD’s synchronized, real-time capability to discover, detect, analyze and mitigate threats and vulnerabilities...it operates at network speeds by using sensors, software, and intelligence to detect and stop malicious activity before it can affect DOD networks and systems...these efforts will include development and integration in the areas of mobile media and secure cloud computing," according to the DOD document.

#CHIPS: "Ferroelectrics fabbed on plastic"

Ferroelectric memories, energy harvesting arrays, sensors and actuators could soon be fabricated on plastic substrates, according to researchers at the Georgia Institute of Technology, who recently demonstrated a new low-temperature process using an atomic-force microscope (AFM).

Georgia Tech postdoctoral fellow Suenne Kim (left) holds a sample of flexible polyimide substrate holding ferroelectric nanostructures, produced in the lab of professor Nazanin Bassiri-Gharb (center) while graduate assistant Yaser Bastani observes. (Credit: Gary Meek)
Further Reading

Monday, July 18, 2011

#ENERGY: "Car Battery in a Bottle"

A radically new approach to the design of batteries, developed by researchers at MIT, could provide a lightweight and inexpensive alternative to existing batteries for electric vehicles and the power grid. The technology could even make “refueling” such batteries as quick and easy as pumping gas into a conventional car.
Further Reading

Friday, July 15, 2011

#ALGORITHMS: "Smarter AIs Read the Manual"


MIT's AI read the manual to learn how to play the game "Civilization."
Artificial intelligence has made great strides in solving particular problems, such as IBM's Watson, which learned to beat humans at the game Jeopardy. Unfortunately, dozens of human experts needed to read and digest the rules of the game in order to code them into algorithms. Now a group at MIT wants to eliminate that hand-coding by allowing an AI to read and digest the rules all by itself.

Massachusetts Institute of Technology researchers are crafting the "last AI" for playing games against humans--a set of algorithms that can read the manual for a game and then produce code that allows it to square off against rivals and even learn as it goes along. If successful, the team wants to install its machine-learning algorithms into robots that learn from the ultimate rulebook--directly from interactions with their environment.

Artificial intelligence has not even come close to the original vision of providing humans with tireless software assistants capable of helping us with ad hoc tasks. The dream has been realized for specific cases, such as IBM's Watson, which was handcrafted to play the TV game show Jeopardy and is currently being used for mining the exabytes of data in medical databases. Similar AIs have also been handcrafted to beat humans at chess, poker and many other computer games.

Researchers are applying their new AI learning approach to Sid Meier's Civilization V.

Unfortunately, to perform in this manner, extensive hand-coding efforts must be undertaken, sometimes by legions of programmers. The programmers must craft a set of algorithms that understand the rules of the game and have some way to develop strategies to win. Such efforts have been limited in the past. For example, chess games typically have "play the computer" modes, but these merely use brute-force look-ahead tables to make the moves most likely to succeed. Most other computer modes in games use hand-coded strategies that human experts have encapsulated into rule-sets, which are then applied when the relevant circumstances are recognized.

Now researchers in MIT’s Computer Science and Artificial Intelligence Lab believe they have a better way: create an AI that can generate its own algorithms after reading the manual for a specific task. The group, led by professor Regina Barzilay, got started by crafting an AI that could read instructions for installing new software and then generate scripts that accomplished the task autonomously. Assisted by David Silver, Royal Society University Research Fellow at University College London, and MIT doctoral candidate S.R.K. Branavan, the group is now tackling board games.

The first game the group has taken on is Sid Meier's Civilization V, in which players try to develop a city into an empire. Using their AI to read the Civilization manual and augment its more traditional machine-learning algorithms, the MIT group's new system was able to increase its victory rate over humans to 79 percent (over a previous high of 46 percent without its manual-reading ability).
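The core idea--letting text from the manual bias the choice among game actions--can be shown with a deliberately naive toy. The real MIT system learns these text-to-action associations statistically; here the "manual" is two invented sentences and the scoring is bare word matching:

    # Toy illustration of manual-guided action selection: score each legal
    # action by how strongly the manual's sentences tie it to the current
    # situation. Deliberately naive; not the MIT group's learned model.

    MANUAL = [
        "build a granary when your city is starving",
        "build walls when enemies approach the city",
    ]

    def score(action, situation):
        """Count manual sentences mentioning both the action and the situation."""
        return sum(1 for s in MANUAL if action in s and situation in s)

    def choose(actions, situation):
        return max(actions, key=lambda a: score(a, situation))

    print(choose(["granary", "walls"], "starving"))  # -> granary
    print(choose(["granary", "walls"], "enemies"))   # -> walls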

Next, the team is aiming to generalize its results for robots in unstructured environments. There, a robot will learn the meanings of words the same way that children do--by touching the hot stove and burning its plastic-covered metallic fingers--that is, through exploratory interactions with its environment. Funding was provided by the National Science Foundation.
Further Reading

Wednesday, July 13, 2011

#WIRELESS: "Smart Monitors Benefit from RF Energy Harvesting"


Georgia Tech professor Manos Tentzeris holds a sensor (left) and an ultra-broadband spiral antenna for wearable energy-scavenging applications, both of which were inkjet printed on paper. (Source: Georgia Tech)

Radio-frequency energy from television, AM, FM, cellular, WiFi, WiMax, Long-Term Evolution and more can now be harvested to power smarter electronic devices for security, environmental sensing, structural monitoring (bridges/buildings), food spoilage and wearable bio-monitoring devices. Much has been made of the radio-frequency waves saturating every cubic inch of air in developed nations, and especially in metropolitan centers where television, AM, FM, cellular, WiFi, WiMax, Long-Term Evolution and scientific/medical sub-bands compete. One research group at the Georgia Institute of Technology (Atlanta), however, sees RF saturation not as a potential health problem, but as an opportunity to scavenge power directly from the air.

Harvesting RF energy from the air and converting it into usable power for smarter electronic devices could enable always-on security devices that work even during power outages, environmental and structural sensors that do not require batteries, food-preparation sensors that monitor spoilage inside containers, and wearable biological sensors that can transmit vital signs to emergency responders even when the person wearing them is unconscious.

Other groups have demonstrated the ability to harvest ambient RF power using relatively expensive antennas tuned to specific frequencies, but Georgia Tech researchers now claim to have created a cheap ultra-wideband antenna technology that can scavenge a wide swath of frequencies simultaneously and allows their energy-harvesting devices to generate much more power than earlier attempts.
"Using an ultra-wideband antenna that lets us exploit a variety of signals in different frequency ranges is giving us greatly increased power-gathering capability," said Manos Tentzeris, a professor at Georgia Tech.

Since 2006, Tentzeris and colleagues have been experimenting with antenna-based energy-scavenging devices, but only in relatively narrow bands. Now, the team claims to have created ultra-wideband antennas and sensors that can be printed on paper or clear polymers using conductive inks with embedded silver nanoparticles, making their energy-harvesting technology commercially feasible.

In demonstrations, Tentzeris' team has created self-powered sensors for chemical, biological, heat and stress applications. The team has also created ultra-cheap RFID (radio-frequency identification) tags using their techniques, which could be used in manufacturing, shipping and for monitoring communications and power usage.

The system works by first converting alternating RF signals into direct current that charges either a super-capacitor or a battery, which typically stores up enough energy to power on a device. Only microwatts of power can be generated in real time from a single RF frequency, but by using these new ultra-wideband antennas, milliwatts of power can be produced, enabling some low-power devices to be run directly. Eventually, the team hopes to power devices requiring as much as 50 milliwatts.
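The charge-then-burst arithmetic works out as follows. This is a sketch: the harvest rate, capacitor size and voltage window are illustrative assumptions, not Georgia Tech's measurements:

    # Energy-budget sketch for RF harvesting into a supercapacitor.
    # All component values and rates are illustrative assumptions.

    harvest_mw = 1.0          # assumed milliwatts from an ultra-wideband antenna
    cap_farads = 0.1          # assumed supercapacitor size
    v_charged, v_empty = 3.0, 1.0

    # Usable stored energy between two voltages: E = 1/2 * C * (V1^2 - V0^2)
    energy_j = 0.5 * cap_farads * (v_charged**2 - v_empty**2)
    charge_s = energy_j / (harvest_mw / 1000.0)
    print(f"stored energy: {energy_j:.2f} J, charge time: {charge_s/60:.1f} minutes")

    # A 50 mW sensor burst could then run for:
    burst_s = energy_j / 0.050
    print(f"supports a 50 mW device for {burst_s:.0f} s per charge")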
Funding was provided by the National Science Foundation, the Federal Highway Administration and Japan’s New Energy and Industrial Technology Development Organization.

Further Reading

#ALGORITHMS: "Apple Spin-Off Hosts Enterprise App Stores"



Apperian hosts app stores for Cisco, Procter & Gamble, Estee Lauder and other corporations that take advantage of this Apple spin-off's ultra-secure cloud-based provisioning service for employer-written private-label apps.

Since Apple announced its over-the-air protocol last year, enterprises from Cisco to Procter & Gamble have quietly opened their own private-label app stores, with the help of behind-the-scenes Apple spin-off Apperian.

Apple's over-the-air protocol enables any enterprise to bypass iTunes and create its own private-label application store, with complete IT control of provisioning, with Apple spin-off Apperian Inc. providing the necessary cloud-based hosting services.

Apperian was spun off with Apple's blessing by its former head of enterprise services, Chuck Goldman, in 2009. With the release of Apple's over-the-air protocol in 2010, Apperian began hosting private-label application stores on its own cloud computers for any enterprise.

Unlike Antenna's HTML5 applications managed by its Volt-client and Antenna's Mobility Platform, the Apperian Client is the first native application to enable enterprises to create their own iTunes-like application store experience for their employees.

"Apperian's business model is to support pure native iOS apps--and soon Android too--which are developed not by us, but by our enterprise customers, and which we house is a cloud-based app store that employees can browse just like they do the iTunes app store," said Apperian CEO David Patrick.

Apperian's cloud platform, called EASE (Enterprise App Services Environment), runs as SaaS (software as a service) to deploy applications created by enterprises for employees on their smartphones and tablets. All the applications are under the control of the enterprise. IT managers use a dashboard to provision and track usage based on specific employee needs. Employees sign onto and authenticate at the company application store, located in Apperian's EASE cloud, then choose from among the applications that IT has authorized them to access. IT managers use EASE analytics to produce reports on usage and trends. An Apperian-supplied software development kit and application templates simplify application development for enterprise IT departments.

Apperian recently got the attention of Wall Street when its business model was endorsed by North Bridge Venture Partners, Bessemer Venture Partners and Kleiner Perkins Caufield & Byers' iFund, which together put up $9.5 million to supplement Apperian's prior seed funding from CommonAngels and LaunchCapital. The endorsement of these venture-capital leaders virtually ensures Apperian's deeper penetration into Fortune 500 companies wanting to open their own private-label application stores. Apperian customers already include Procter & Gamble, Dupont, AAA, Intuit, Estee Lauder, NetApp, the Mitre Corporation, and the world's largest enterprise application store, Cisco's App Fridge (for cool enterprise applications).

Further Reading

#ROBOTICS: "Robots get Kinect's 'eyes and ears'"


Microsoft Robotics has been giving away its free Robotics Developer Studio, complete with a 3D simulator, for the last six years, but without gaining much visibility. Microsoft, however, is convinced that will change Wednesday (July 13th) when the company launches added services that allow users to plug the Kinect hands-free hardware--intended for gesture control of its Xbox gaming console--directly into any robot.
Further Reading

Tuesday, July 12, 2011

#ALGORITHMS: "Desktop Virtualization Growing to $5B by 2016"


Virtualization will seep into every aspect of business computing over the next five years, growing tenfold as recession cost-cutting fades and mobilization of the workforce increases. Virtualization was perceived as useful but expensive during the cost-cutting era, before the recession began to fade circa 2009, when just $500 million was invested in it. During the recession, it was still cheaper to tether workers to their desks and force them to work with inexpensive generic PCs. Now, however, businesses are jumping on the virtualization bandwagon not only to unshackle workers from their desktop PCs, but also to sidestep the security risks of mobile devices rather than rely on applications customized by IT for thin clients, tablets and smartphones.

According to ABI Research, the worldwide market for hosted VDI (virtual desktop infrastructure) will grow tenfold from 2009 levels to top $5 billion by 2016, with North America and Europe accounting for the bulk of the new installations.

Virtualization enables business partners to provide more comprehensive solutions to new and existing customers, as well as allowing companies to optimize IT investments, infrastructure and resources.

ABI Research's study, entitled "Desktop Virtualization: The Global Market for Virtualized Business Desktop PCs," concentrates on virtualization of desktop PCs, whereby the operating system, applications and databases are relocated to cloud computers, which then remotely deliver the same desktop experience as before to a secure laptop, netbook or other thin client, tablet or smartphone. Leading the charge in this category, according to ABI Research, are Norton Ghost, Citrix XenDesktop and VMware View. Each of these solutions meets the demands of today's mobile workforce while ensuring high security that would be difficult to match with applications custom-made by IT.

"The VDI market will exhibit impressive growth in the next five years," said Larry Fisher, director, automotive, energy and emerging technologies, at ABI Research. "Buyers will principally consist of large enterprises looking to reduce their desktop support and management costs, and companies and organizations that need to lock data in the data center, either for compliance or security reasons, [allowing] the IT department to integrate a wide range of devices into corporate networks with relative ease [and providing] full corporate desktops through iPads, smartphones and other popular devices."

Other benefits include enhanced business continuity, lower overall energy expenditures for thin clients compared with desktop PCs, and enhanced recovery capabilities after disasters. High-end virtualization and cloud-computing providers like IBM cite four main reasons to adopt virtualization now: consolidation of resources to improve efficiency and business agility; easier management of variable workloads; the ability to automate processes to reduce management costs and provide more consistency; plus the ability to optimize delivery of services for faster responses to changing circumstances.

Further Reading

Monday, July 11, 2011

#CHIPS: "Freescale processors gain on-chip e-reader"


Earlier this year, Freescale Semiconductor Inc. announced the first line of processors designed to power sub-$99 e-readers. Now it has extended that line downward with integrated E-Ink driver circuitry for low-end devices from medical and home/office automation to watches whose face is an electronic paper display (EPD).
Further Reading

#ALGORITHMS: "Eight Touch-Screen Gestures That Increase Comprehension"


In the age of information overload, business agility requires optimal knowledge-acquisition techniques that psychologists call "active learning." These eight touch-screen gestures optimize active-learning techniques for electronic media and will be available in an app this fall. The Georgia Institute of Technology has modernized traditional active-learning techniques for touch-screens by crafting a set of gestures for electronic media--including highlighting, commenting, extracting, collapsing, magnifying, linking, outlining and bookmarking.
Further Reading

Thursday, July 07, 2011

#ALGORITHMS: "CIOs Say Business Analytics Is Driving Growth"


An IBM poll of 3,000 CIOs reveals that business analytics, mobility solutions and virtualization are the top three priorities for increasing their enterprise's competitiveness.

A CIO survey reveals that harnessing business intelligence and analytics is by far the CIO's top priority for increasing competitiveness over the next three to five years. IBM surveys have polled the opinions of 13,000 CIOs worldwide over the last six years, revealing in their latest study that business analytics remains the top priority in enhancing the competitiveness of their enterprises.

"CIOs must understand the needs and goals of their organizations, agencies, divisions or business units, and deliver on their unique mandates," said the report. "The biennial IBM Global Chief Information Officer Study, reflects face-to-face conversations with over 3,000 CIOs from organizations of every size, sector and region."

A 76-page summary of the report's findings, entitled "The Essential CIO: Insights from the Global Chief Information Officer Study," is available free as a PDF.
According to the study, the top strategic technology investment CIOs recommend to enhance their enterprises' competitiveness over the next five years is business analytics, followed by mobility solutions and virtualization. Cloud computing, a distant concern for CIOs during the last iteration of the study in 2009, rose to tie for the fourth slot in 2011.

"To increase competitiveness, 83 percent of CIOs have visionary plans that include business intelligence and analytics, followed by mobility solutions (74 percent) and virtualization (68 percent). Since our 2009 Global CIO Study, cloud computing shot up in priority, selected by 45 percent more CIOs than before and leaping into a tie for fourth place with business process management (60 percent each)," said the study.
CIOs worldwide consider business intelligence and analytics the best methods of mining the nuggets of wisdom in their expanding data warehouses.

CEOs, in particular, increasingly depend on the business intelligence and analytics managed by their CIOs as the best means of turning raw data about their enterprise's performance into information they can use to make informed decisions.

CIOs in midsized organizations deemed mobility solutions particularly important, according to a subset study, entitled "The Essential CIO: Midmarket CIO Study 2011."
"Trends such as the explosion of mobile shopping, the proliferation of smart phones, and the drive to better harness data through the social Web to drive results have transformed the way midsize firms do business," said that report.

CIOs with a mandate from their CEOs to expand their operations formed a group more than double the size of any other identified in the report (except in the financial-markets sector, where CIOs predominantly operated under a "pioneer" mandate).
"Simply put, [expand] is what organizations today are asking for most often from their CIOs," said the report.

Besides business intelligence and analytics, CIOs at midsize enterprises with a mandate to expand identified process and product simplification as the drivers by which they planned to demonstrate fulfillment of their mandate. Nearly all CIOs in this group--98 percent--said they aimed to simplify key internal processes, and 92 percent said they would drive better real-time decision making with business intelligence and analytics. Pioneer-mandate CIOs in midsized enterprises identified product/service profitability analysis as their top priority for turning raw data into actionable business intelligence.
Further Reading


Wednesday, July 06, 2011

#CHIPS: "Smarter Phase Change Memory Beats Flash"





Using a variety of solid-state memory technologies, IBM Research recently demonstrated a flash substitute that is 100 times faster and will last 3,000 times longer than the flash memory that has replaced hard disks in everything from smartphones and touch-screen tablets to high-end enterprise servers and cloud-based data centers. Solid-state flash memory has largely replaced mechanical hard disks in mobile devices and even in high-performance server farms such as those at Answer.com, but now IBM has demonstrated a new, faster alternative to flash that lasts longer and can store four times as much information per bit-cell.

Flash memory has almost driven the hard-drive makers out of business, with only two major manufacturers left, Seagate and Western Digital. However, hard drives have been hanging on because of the longevity of their spinning metal disks, which experience no wear since the read-write heads never touch the disk's surface. Flash memory, on the other hand, is faster to access, more compact and much lower power. Unfortunately, flash memory bit-cells wear out in as few as 3,000 cycles, making them suitable only for relatively short-lived mobile consumer devices (although server farms use them by monitoring their wear and implementing complex algorithms to map around bad cells, until a point of diminishing returns when they have to be replaced).

IBM's phase change memory test chip was fabricated in its 90-nanometer complementary metal oxide semiconductor (CMOS) process.

Now IBM scientists have demonstrated a solid-state alternative to flash memory that solves most of its problems, plus offers a clear migration path to higher performance, higher density and greatly expanded longevity. The key to this breakthrough memory technology--called phase-change memory--is its ability to store up to four bits per memory cell. Flash memories have likewise, of late, started storing multiple bits per cell, albeit without solving their short-lifetime problem.

"As organizations and consumers increasingly embrace cloud-computing models and services, whereby most of the data is stored and processed in the cloud, ever more powerful and efficient, yet affordable storage technologies are needed," states Dr. Haris Pozidis, Manager of Memory and Probe Technologies at IBM Research – Zurich. "By demonstrating a multibit phase-change memory technology which achieves for the first time reliability levels akin to those required for enterprise applications, we made a big step towards enabling practical memory devices based on multi-bit PCM."

IBM's innovation involves several architectural breakthroughs which together solve all the outstanding problems with flash while boosting its performance 100 times. Phase-change memories work in a manner similar to rewritable optical media, that is, by heating and cooling a chalcogenide material so that it can take on two states--either crystalline or amorphous. The difference with solid-state phase-change memories is that these changes are induced and read out electrically rather than with a laser as with optical disks.

By enlisting not only the phase change from crystalline to amorphous, but also the analog gradations in resistance of the material sandwiched between the electrodes, IBM was able to encode four bits per cell.

"We apply a voltage pulse based on the deviation from the desired level and then measure the resistance. If the desired level of resistance is not achieved, we apply another voltage pulse and measure again--until we achieve the exact level,” explains Pozidis.

Through this iterative process, the IBM scientists were able to demonstrate the longevity of their technique while simultaneously showing that even in the worst case, access time will be no worse than 10 microseconds--still 100 times faster than flash.
Further Reading