Tuesday, May 31, 2011
#ALGORITHMS: "Cloud Computing Expands Reach with iDataCenters"
Around the world, data centers are popping up to service the continued expansion of cloud services into mainstream computing. Following the lead of Google, IBM, Amazon and others, Apple continues to prep massive data centers east and west for its new thrust into cloud computing. Apple's iDataCenters and the cloud services they will host, rumored to debut soon, will include the rollout of its iCloud.com domain.
The first cloud service to be offered by Apple will likely be dedicated to streaming music to iPads, iPhones, iPods and even PCs running iTunes—but that is just the start. Apple is also planning to retool its MobileMe service to provide a "storage locker" in the clouds not only for music, but for any file—enabling its iOS-based devices to store massive amounts of personal and corporate data that can be quickly accessed from the clouds as if it were stored locally.
The eastern Apple iDataCenter (not to be confused with the data center management app by the same name) houses a massive half million square feet of server farm and support facility space in rural North Carolina. Apple’s eagerly awaited cloud music service will most likely be streaming music from this new data center in Maiden, N.C. The iDataCenter was personally welcomed to the state by Gov. Bev Perdue, who signed an agreement that gives Apple a state tax credit worth $46 million.
The sprawling iDataCenter in North Carolina is rumored to be setting Apple back a cool billion dollars, and will draw an astounding 100 megawatts of power, which had to be negotiated with Duke Energy Corp. (Charlotte). The installation will boost Apple's cloud computing capabilities with five times more throughput than its existing 100,000-square-foot data center in Newark, Calif.
Apple has already reached agreements with EMI Group, Sony and Warner Music Group regarding its plans to stream audio to mobile device users who own those songs and are authorized to stream them. The only remaining holdout is Universal Music Group—the last member of the big four—which is expected to sign on before Apple unveils iCloud.com at its Worldwide Developers Conference next week (June 6-10 in San Francisco).
Apple is also planning a new data center in Santa Clara, Calif., which is being built by DuPont Fabros Technology—a developer of wholesale data center space. This smaller 11,000-square-foot facility consumes 2.3 megawatts. Apple has signed a seven-year lease on the space, which is near the company’s Cupertino headquarters. The new western iDataCenter is rumored to be housing a reinvented MobileMe service retooled to offer storage locker space for cloud computing tasks.
Further Reading
Thursday, May 26, 2011
#MEMS: "High-temp MEMS goes seismic"
Analog Devices Inc. is offering a dual-axis accelerometer capable of withstanding up to 175 degrees Celsius (347 degrees Fahrenheit) for ruggedized industrial applications. The device is based on what ADI calls the world's first high-temperature micro-electro-mechanical system (MEMS) technology. Today, applications using an accelerometer in a high-temperature environment, such as tools used in geological down-hole measurements, require complex compensation circuitry to ensure that readings are not skewed by temperature. The new iMEMS ADXL206, on the other hand, has virtually no quantization errors or other non-monotonic behaviors over its entire operating range, from -40 degrees to +175 degrees Celsius, according to ADI (Norwood, Mass.).
Further Reading
#ALGORITHMS: "Smarter Conservation from Analytics and Cloud Computing"
The Water Pilot Study displayed analytics about each household’s usage patterns, comparing them to average usage and to other engaged members of the program. (Source: IBM, City of Dubuque)
Ecological conservation of precious natural resources, such as clean water, can be made smarter by using cloud computing to track usage patterns and software analytics to encourage voluntary conservation efforts.
Anecdotal evidence has long suggested that consumers will voluntarily change their usage patterns to foster conservation when given clear choices about how to do so. Now the city of Dubuque, Iowa, together with IBM, has used cloud computing and analytics to determine just how much might be saved voluntarily—measuring a 6.6 percent reduction in water utilization and an eightfold improvement in leak detection and response time.
As a part of its Smarter Sustainable Dubuque research initiative, the city of Dubuque—a community of over 60,000 residents—sponsored the Water Pilot Study, which instrumented 151 homes with smart meters for nine weeks. During that time, the city supplied the owners with a real-time readout of their current consumption rate. Households were also supplied with analysis as to how they could improve their consumption patterns and social networking tools with which they could compete for conservation awards. (An additional 152 control homes were instrumented too, but without providing those households with a readout of consumption.)
The smart meters constantly updated water usage and communicated it to the IBM Research Cloud every 15 minutes. Each household's data set was then analyzed for anomalies, which were reported back to the households to help them understand their consumption. One surprising finding was that 8 percent of the households—12 out of 151—had water leaks that they were unaware of until their house was instrumented and its data analyzed.
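A minimal sketch of the kind of leak screen such analytics might run is shown below. The 15-minute cadence comes from the study; the heuristic (overnight flow that never drops to zero) and every name in the code are illustrative assumptions, not IBM's actual pipeline:

```python
# Hypothetical leak screen over 15-minute smart-meter readings.
# Assumption: a healthy household shows zero flow at some point
# overnight; a meter that never reads zero suggests a leak.

def overnight_readings(readings, start=0, end=20):
    """Slice the 15-minute samples covering midnight to 5 a.m. (20 samples)."""
    return readings[start:end]

def has_suspected_leak(daily_readings, tolerance=0.0):
    """Flag a household if every overnight sample exceeds `tolerance` gallons."""
    return all(min(overnight_readings(day)) > tolerance for day in daily_readings)

# Example: two days of synthetic overnight data for one household.
household = [
    [0.3] * 20,              # constant trickle all night -> suspicious
    [0.3] * 10 + [0.4] * 10,
]
print(has_suspected_leak(household))  # True -> report back to the household
```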
Data was collected and analyzed anonymously, but the consumption patterns, trends and anomalies were shared with both city officials and other community members without revealing individuals' names or addresses. Using a Web portal, community members signed on to view their own household usage patterns as well as comparisons with others and overall averages. Online games and competitions were also sponsored to promote sustainable consumption habits and to help consumers perceive the communitywide impact of their efforts, in terms of reduced water bills, fewer gallons consumed and a reduction in each household's carbon footprint.
In total, 89,090 gallons were saved by the 151 households over nine weeks, which would amount to more than 514,740 gallons when extrapolated to a year, or about 3,409 gallons per household annually. If the program were extended to the entire city of Dubuque, which consists of 23,000 households, the smart meters and displayed analytics would have saved households a total of over $190,930 a year.
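The extrapolation is easy to verify from the article's own figures (the per-household dollar amount is derived here only to illustrate the arithmetic):

```python
# Reproduce the article's extrapolation from the 9-week pilot.
gallons_saved = 89_090      # saved by 151 households over 9 weeks
weeks_per_year = 52
households = 151

annual_gallons = gallons_saved * weeks_per_year / 9
print(round(annual_gallons))               # ~514,742 ("more than 514,740")
print(round(annual_gallons / households))  # ~3,409 gallons per household

citywide_savings = 190_930  # dollars per year, for 23,000 households
print(round(citywide_savings / 23_000, 2))  # ~$8.30 per household per year
```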
Surveys of household members also revealed that 77 percent thought the Web portal increased their understanding of water conservation, 70 percent said they understood better the community impact of the choices they make, and 61 percent reported that they had made personal changes in the ways they used water, such as taking shorter showers, fixing leaks, purchasing more water-efficient appliances or altering their yard watering time of day.
Further Reading
Tuesday, May 24, 2011
#CHIPS: "Smarter eEyes Focus on Cure for Blind"
Real biological eyes—diagrammed at top—use arrays of retinal cells that are lined up in rows, but their interconnections (middle) use a fractal pattern common in nature, according to University of Oregon professor Richard Taylor. (Source: University of Oregon)
By designing electronic-eye (eEye) implants with fractal interconnects, researchers aim to overcome the mismatch between conventional image chips and the eye's natural wiring. Today, several efforts are under way worldwide to create silicon retinas that can be implanted in the eyes of the blind, thereby enabling them to see again, albeit at vastly reduced resolution. Now researchers are aiming to remedy that by replicating the fractal-like interconnection topology of real eyes.
Real biological eyes contain the equivalent of 127 million pixels, whereas conventional eEyes currently use sensors with fewer than 64 pixels, and even next-generation designs are only aiming for about 1,024. What is even worse, these researchers say, is that the interconnection topology of an image chip is a square array, whereas the interconnection matrix for the "pixels" in a real biological eye is a branching structure called a fractal.
Fractals are common among all living things as a result of growth processes that repeat a basic set of instructions—a fractal algorithm. For instance, the trunk of a tree divides into branches using the same fractal algorithm that is used for the veins in a leaf. In nature, trees, clouds, rivers, galaxies, lungs and neurons use the same fractal pattern of interconnections.
Now researchers are aiming to replicate this technique for interconnecting imaging elements in eEyes. Today's eEyes just sink metallic electrodes—one for each pixel—into the ganglia behind the eye, which then depends on the plasticity of the visual cortex in the brain to decipher the output from these new pixels—even though they do not match the normal topology of the biological retina. However, new research efforts are developing a technique that starts with a metallic seed that then grows all the repeated branching structures that in turn mate to the optic nerve behind the eye, thereby delivering to the brain the same kind of signals as retinal neurons.
The specific algorithm harnessed by the technique is called "diffusion limited aggregation," which researchers are using to grow image sensor interconnections that mimic a natural neural topology before being surgically implanted and interfaced to the optic nerve.
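Diffusion-limited aggregation itself is simple enough to sketch: random walkers wander until they touch the growing cluster and stick, producing the branching Brownian trees the researchers hope to grow in metal. The grid size and walker count below are arbitrary illustrative choices, not the researchers' parameters:

```python
import random

# Diffusion-limited aggregation on a small grid: walkers wander
# randomly and freeze when they touch the cluster grown from a seed.
N = 61
grid = [[False] * N for _ in range(N)]
grid[N // 2][N // 2] = True  # the metallic "seed" at the center

def touches_cluster(x, y):
    return any(grid[x + dx][y + dy]
               for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               if 0 <= x + dx < N and 0 <= y + dy < N)

for _ in range(400):  # release 400 walkers
    x, y = random.randint(1, N - 2), random.randint(1, N - 2)
    while not touches_cluster(x, y):
        x = min(max(x + random.choice((-1, 1)), 1), N - 2)
        y = min(max(y + random.choice((-1, 1)), 1), N - 2)
    grid[x][y] = True  # the particle sticks, extending a branch

print(sum(row.count(True) for row in grid), "sites in the Brownian tree")
```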
This summer Professor Richard Taylor and doctoral candidate Rick Montgomery will begin a yearlong quest with Professor Simon Brown at the University of Canterbury in New Zealand to grow these metallic fractal interconnection topologies for the backside of silicon image chips.
Instead of just providing a single output for each pixel, as with conventional eEyes, image sensors with the fractal interconnects will connect to the optic nerve with the same overlapping topology used by real biological retinal neurons. As a result, the researchers hope that the brain's visual cortex can perform the same sort of functions for eEyes that it does for real eyes, enabling the blind to recover not just some vision, but a visual experience that rivals that of normal people.
One challenge cited by the researchers is finding metals that can be coaxed with diffusion limited aggregation to form the Brownian trees typical of retinal cells and yet can be safely implanted into humans without side effects. Funding is being provided by the Office of Naval Research (ONR), the U.S. Air Force and the National Science Foundation.
Further Reading
Monday, May 23, 2011
#OPTICS: "Plastic optics boosted to 25 Gbit/s"
A new vertical-cavity surface-emitting laser (VCSEL) technology from VI Systems aims to extend the reach of cheap plastic fiber optics, with scientists at Georgia Institute of Technology reporting successful operation at 25 Gbits per second. VI Systems (Berlin) reports that it has achieved 40 Gbit/s in the lab and is aiming for 100 Gbit/s performance.
Further Reading
Sunday, May 22, 2011
#ENERGY: "Smarter Hydrogen Fuel Maker Mimics Plants"
The emerging hydrogen economy depends on finding smarter ways to generate the volatile gas from plentiful natural resources, such as splitting water into hydrogen and oxygen with sunlight. Smarter hydrogen generators will ditch precious metals for fields of silicon nanopillars etched with semiconductor fabrication equipment, thus realizing the dream of cheap, abundant hydrogen fuel generated from water and sunlight a la plants.
Silicon nanopillars—each 2 microns in diameter—etched with semiconductor fabrication equipment substitute for expensive platinum electrodes, enabling cheap hydrogen generation from water and sunlight. (Source: Technical University of Denmark)
According to the Department of Energy (DoE), we should be mimicking the way plants generate their own fuel from water and sunlight, but unfortunately the price of conventional electrolysis is too high due to its use of expensive platinum catalysts. To realize the dream, DoE-funded researchers are now fabricating tiny micrometer-sized pillars of cheap, abundant silicon to take the place of those catalysts, thus promising to bring down the price of hydrogen fuel and enable widespread commercialization.
Plants use photosynthesis to produce a fuel (adenosine triphosphate) from sunlight and water, which is then stored until it is needed for respiration, growth and other normal cellular operations. The "hydrogen economy" concept mimics this operation by using sunlight to drive an artificial photosynthesis-like action more accurately termed photo-electro-chemical (PEC) water splitting.
The result could be abundant, cheap hydrogen gas that can be stored indefinitely without the need for batteries, then converted into energy on-the-fly either by burning it directly in engines or using it to produce electricity in fuel cells.
Today most hydrogen fuel is produced from natural gas, which unfortunately releases carbon dioxide as a by-product. However, if artificial photosynthesis can be perfected, then hydrogen fuel could be produced from nothing more than water and sunlight, making it cleaner and cheaper than any conventional fuel.
Unfortunately, the easiest way to split water into hydrogen and oxygen makes use of expensive platinum catalysts, but now the SLAC (originally Stanford Linear Accelerator Center) National Accelerator Laboratory, working with Stanford University and the Technical University of Denmark, believes it has eliminated the need for expensive catalysts, in favor of microscopic fields of pillars etched in silicon.
The key to the researchers' hydrogen generation method was their discovery that depositing nanoscale clusters of molybdenum-sulfide molecules onto the fields of silicon pillars enabled them to split the hydrogen off the oxygen in H2O (water) when exposed to sunlight. The resulting "chemical solar cell" was found to work as well as conventional designs that use expensive platinum catalysts.
Now the researchers are working on a mechanism that separates the hydrogen from the oxygen generated, thus allowing each to be separately stored until needed as fuel for combustion or to produce electricity in a fuel cell.
Jens Nørskov at the DoE's SLAC National Accelerator Laboratory worked on the project with researchers at Stanford University and a team at the Technical University of Denmark led by Ib Chorkendorff and Søren Dahl.
Further Reading: http://bit.ly/NextGenLog-ij31
Friday, May 20, 2011
#ENERGY: "Algae creates hydrogen fuel"
Algae can produce hydrogen fuel from water and sunlight, with a little boost from man-made nanoparticle catalysts, according to engineers at the U.S. Department of Energy's Argonne National Laboratory. By commandeering the photosynthesis mechanisms that enable algae to harness the energy of the sun, algae can produce abundant fuel to power an emerging hydrogen economy, they say.
Chemist Lisa Utschig tests a container of photosynthetic proteins linked with platinum nanoparticles, which can produce hydrogen from sunlight. Tiny bubbles of hydrogen are visible in the container at right.
Led by Argonne National Lab chemist Lisa Utschig, working with colleague David Tiede, the team at Argonne's Photosynthesis Group recently demonstrated how its platinum nanoparticles can be linked to key proteins in algae to coax them into producing hydrogen fuel five times more efficiently than the previous world record.
Further Reading: http://bit.ly/NextGenLog-l63E
#ALGORITHMS: "Cloud Makes 3D Models from Aerial Photos"
Cloud-based services are enabling fast, cheap, large-scale three-dimensional models of almost any landscape. The models are generated from easy-to-obtain aerial photos from drones—unmanned aerial vehicles.
Unmanned drones can take thousands of aerial photographs today, but stitching them together has required human expertise and sophisticated high-end software. (Source: EPFL)
New software from EPFL spinoff Pix4D automatically generates 3D models from aerial photos. (Source: EPFL)
Unmanned aerial vehicles (UAVs) are becoming inexpensive enough for small businesses or even individuals to use, permitting thousands of aerial photographs to be snapped of points of interest. Unfortunately, the high-powered analysis software required to stitch together aerial photos is outside the budget of all but large corporations. Now a new genre of inexpensive cloud-based services is appearing, capable not only of stitching together those patchworks of photos, but even of automatically interpreting what they see, thereby generating three-dimensional (3D) models on the cheap.
The Pix4D project does just that. A spin-off of the Swiss research university Ecole Polytechnique Federale de Lausanne (EPFL), Pix4D was named for its ability to transform the fourth dimension—time—into a method of generating 3D models from 2D images shot by aerial drones. By harnessing time, a UAV with a digital camera can take thousands of photographs from the air, capturing every possible angle of view of objects on the ground. Without smart cloud-based computing resources, however, these aerial photos would have to be hand-assembled, and even then they would only yield a flat 2D map of the area photographed.
Pix4D software running in the clouds, on the other hand, not only automatically stitches together thousands of 2D images to make accurate maps, but can also infer the 3D information needed to make a model that can then be viewed from any orientation. The cloud service works with the geo-tags on each image, comparing them with those taken at nearby times and locations, resulting in a stunning 3D model of whatever is imaged using a relatively inexpensive cloud-based service.
The Pix4D cloud service accepts a stream of related photos from which it generates a 3D model in as little as 30 minutes. The service not only automatically generates the 3D maps, but also adds points of interest that can be cataloged by users. To demonstrate the service, Pix4D took 50,000 photos of its host city—Lausanne, Switzerland—and created the world's highest-resolution 3D model of the city. The Pix4D user interface then allows users to navigate to any location in the city and view it from any orientation.
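One ingredient of such a pipeline is easy to illustrate: using geotags to pick which photo pairs are worth feeding to the expensive image matcher. The haversine-based pairing below is a generic sketch of that idea, with made-up coordinates; it is not Pix4D's actual code:

```python
from math import radians, sin, cos, asin, sqrt
from itertools import combinations

def haversine_m(a, b):
    """Great-circle distance in meters between two (lat, lon) geotags."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(h))

# photos: filename -> geotag; pair up shots taken within 50 m of each
# other, the only ones likely to overlap enough for feature matching.
photos = {
    "img_001.jpg": (46.5197, 6.6323),  # Lausanne, illustrative coordinates
    "img_002.jpg": (46.5199, 6.6325),
    "img_003.jpg": (46.5300, 6.6500),
}
candidate_pairs = [(p, q) for p, q in combinations(photos, 2)
                   if haversine_m(photos[p], photos[q]) < 50]
print(candidate_pairs)  # only nearby images go to the expensive matcher
```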
Don't have a ready UAV? EPFL has spun off another startup that makes an inexpensive drone. Called the senseFly, this pint-sized aerial vehicle is currently being used to take high-resolution photos for many applications, from farmers who wish to survey the evolution of their crops over large distances and long periods of time to archaeologists hunting for evidence of as yet undiscovered ruins.
Further Reading: http://bit.ly/NextGenLog-mkG1
#WIRELESS: "Smart Algorithm Untangles Network Snarls"
As the complexity of networks skyrockets, managers have been hard-pressed to come up with analytic tools that can cope, but now researchers claim to have a general-purpose technique that unsnarls nearly any network.
The driver nodes (red) that can control the rest of a network are often very small in number, and they are seldom the most active nodes. Credit: Mauro Martino.
Researchers claim to have come up with a new computational model that can analyze any type of complex network—from the nodes of the Internet to the neurons of the brain—revealing the critical points that can be used to control the entire network.
Engineers have been using control theory to manage electronic networks since their invention—allowing an entire network to be controlled from just a few nodes with installed feedback loops that monitor input/output and adjust accordingly. Unfortunately, control theory usually assumes a closed system whose topology was carefully architected by an engineer. However, many networks today—such as connected online communities—are self-organizing, creating a topology that is difficult to analyze and impossible to control with conventional theory.
Now researchers are claiming that a new algorithm can help steer even the most complex networks toward desired stable states, no matter if they are the result of engineering design or naturally evolved. This framework for controlling complex self-organized systems automatically identifies a set of "driver nodes," which can be used to guide the entire network's dynamics using well-known time-dependent control methodologies.
One interesting characteristic of these driver nodes, which most observers would not have predicted, is that their number is inversely related not to the total number of nodes, but to how many connections each node makes—called a network's degree-of-connectedness, or just "degree distribution."
For instance, networks with relatively "sparse" connections among inhomogeneous nodes are among the most difficult to control. This is because many nodes would have to be controlled to assert authority. In contrast, dense homogeneous networks can be controlled with far fewer driver nodes. For each type of network, the team calculated the percentage of driver nodes that need to be controlled in order to gain control of the entire system. The results ranged from as high as 80 percent for the most sparsely connected networks, to as few as 10 percent for the most dense.
Another result the researchers cited as particularly counterintuitive was that the specific location of driver nodes almost invariably avoids the highest-degree nodes in a network.
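The article does not spell out the algorithm, but the standard structural-controllability recipe for finding driver nodes reduces to a maximum matching: pair each node's "out" copy with another node's "in" copy across the directed edges, and any node whose "in" copy goes unmatched must be driven directly. A sketch with networkx, on a toy graph of my own:

```python
import networkx as nx
from networkx.algorithms.bipartite import hopcroft_karp_matching

def driver_nodes(edges, nodes):
    """Structural-controllability driver nodes via maximum matching:
    a node is a driver if no edge 'matches into' it."""
    B = nx.Graph()
    top = [("out", u) for u in nodes]
    B.add_nodes_from(top, bipartite=0)
    B.add_nodes_from((("in", v) for v in nodes), bipartite=1)
    B.add_edges_from((("out", u), ("in", v)) for u, v in edges)
    matching = hopcroft_karp_matching(B, top_nodes=top)
    return {v for v in nodes if ("in", v) not in matching}

# Toy directed chain with a branch: 1 -> 2 -> 3, 1 -> 4
nodes = [1, 2, 3, 4]
edges = [(1, 2), (2, 3), (1, 4)]
print(driver_nodes(edges, nodes))  # e.g. {1, 4}: drive the root and the branch
```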
The new analytic technique was created by professor Jean-Jacques Slotine at the Massachusetts Institute of Technology in collaboration with professors Albert-Laszlo Barabasi and Yang-Yu Liu at Northeastern University. The researchers claim that their algorithm works for any real-life network—man-made or natural—including the Internet, cell-phone networks, social networks, gene expression networks, and the neural networks of the brain.
Further Reading: http://bit.ly/NextGenLog-iGEm
Wednesday, May 18, 2011
#GRAPHENE: "Activated graphene boosts supercaps"
Brookhaven National Laboratory recently characterized activated graphene fabricated at the University of Texas-Austin, concluding that it had an energy density that could rival batteries, a recharge rate quicker than any battery, and a lifetime of at least 10,000 charge/discharge cycles.
Dong Su (left) and Eric Stach study samples of activated graphene at Brookhaven’s Center for Functional Nanomaterials. Source: Brookhaven National Laboratory.
Used instead of batteries, activated-graphene supercapacitors could last 27 years for a plug-in vehicle recharged once a day. The DoE also speculates that gigantic activated-graphene supercapacitors at power stations could smooth out availability from intermittent sources, such as wind and solar.
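The 27-year figure is straightforward to check, assuming exactly one full charge/discharge cycle per day:

```python
cycles = 10_000      # demonstrated charge/discharge lifetime
cycles_per_day = 1   # one plug-in recharge per day
print(cycles / (cycles_per_day * 365.25))  # ~27.4 years
```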
Further Reading: http://bit.ly/NextGenLog-jjCR
Tuesday, May 17, 2011
#CHIPS: "Silicon Labs aims to be one-stop timing shop"
Silicon Laboratories Inc. recently unveiled its long-term strategy to dominate the timing chip market following its acquisition of MEMS oscillator maker Silicon Clocks and traditional timing chip maker SpectraLinear. Silicon Labs claims its timing business grew 70 percent in 2010, resulting in $50 million in timing chip sales, and it forecasts growth in the high double digits again in 2011.
Further Reading: http://bit.ly/NextGenLog-lQQW
Monday, May 16, 2011
#CHIPS: "HP discovers memristor mechanism"
Electrical engineers who expressed skepticism that Hewlett Packard Co.'s memristors could switch as fast as DRAM and yet retain their memories millions of times longer than flash can now rest easy, according to HP, which cited new experimental results.
Synchrotron x-rays probed the memristor in a 100 nanometer region with concentrated oxygen vacancies (right, shown in blue) where the memristive switching occurs. Surrounding this region a newly developed structural phase (red) was also found to act like a thermometer revealing how hot the device becomes when read or written.
Using their favorite formulation—titanium oxide—HP researchers used synchrotron x-rays to correlate the device's electrical characteristics with its atomic structure, chemistry and temperature in three dimensions. The previously unforeseen conclusion was that a hot spot near the bottom electrode heats enough during switching to induce crystallization of the oxide. After driving out vacancies (for a 1) or introducing them (for a 0) in a one-to-two-nanometer-thick region, the film cools in an annealing-like process that leaves it in a fixed crystalline state that should remain that way indefinitely.
Further Reading: http://bit.ly/NextGenLog-lZsz
Wednesday, May 11, 2011
#CHIPS: "Smarter Atomic Clock on a Chip Debuts"
Atomic clocks keep the world's processes on track—providing a universal time base with which everything from satellite communications to demolition explosions are synchronized. Now chip-scale atomic clocks are small enough to install inside mobile devices.
Symmetricom atomic clock on a chip based on Sandia National Laboratories technology (Source: Symmetricom)
Today accurate atomic clock readings are most commonly obtained from global positioning system (GPS) signals, but a new atomic clock on a chip will work where GPS does not reach, such as indoors, in tunnels, underground, under the sea and in outer space.
Miners, for instance, must set many charges that need to be blown up in perfect synchronization, necessitating atomic clocks that can time simultaneous processes down to a millionth of a second. Deep-sea operations likewise often need precise timekeepers to synchronize operations with the ships above them. Military applications also often require ultra-precise timing, such as when clearing mines—an operation that cannot depend on GPS signals, which are often blocked by electromagnetic jamming.
Telecommunication applications could also benefit from having integrated atomic clocks, for instance, to synchronize data streams when packets traverse different routes. And relay stations for cross-country telephone and Internet connections could use atomic clocks to reassemble packets into the correct order even during GPS outages.
A new atomic clock on a chip offers a solution for these applications.
Atomic clocks today are bigger than a breadbox and require a car battery to power them in the field, but Sandia National Labs, Draper Laboratory and Symmetricom have been working for almost a decade to reduce them to a chip-scale package running off two AA batteries.
The matchbook-sized atomic clock is 100 times smaller than previous commercial models, measuring only 1.5 inches square and half an inch thick, and consuming just 100 milliwatts, compared with 10 watts for conventional atomic clocks.
The secret to the new atomic clock on a chip is a solid-state laser illuminating a tiny container holding normal non-radioactive cesium vapor. The laser interrogates the cesium gas, causing its atoms to vibrate at a precise frequency that can be sensed and used to keep the clock accurate within a millionth of a second per day.
The team achieved the atomic clock on a chip by integrating a vertical-cavity surface-emitting laser (VCSEL) next to the cesium container. This reduced the power needed to illuminate the cesium by a thousand times over the rubidium atomic vapor lamp used by conventional atomic clocks. A microwave generator splits the laser beam into two closely related frequencies, which cause the cesium atoms to "beat" at their difference. A photodiode monitors the light passing through the cesium gas, counting the beats until they add up to 4,596,315,885, which is equal to one second.
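The quoted beat count checks out against the definition of the second: cesium's hyperfine transition is fixed at 9,192,631,770 Hz, and the article's figure is exactly half of it, consistent with the two laser frequencies being offset by half the transition frequency:

```python
cesium_hyperfine_hz = 9_192_631_770  # SI definition of the second
beats_per_second = 4_596_315_885     # count quoted in the article

# The quoted beat count is exactly half the hyperfine frequency.
assert beats_per_second * 2 == cesium_hyperfine_hz
print("one second =", beats_per_second, "beats")
```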
Further Reading: http://bit.ly/NextGenLog-jEVV
#3D: "MIT preps high-def, glasses-free 3-D"
A new algorithm for rendering higher resolution 3-D images was recently described by the Massachusetts Institute of Technology (MIT) Media Lab. The high-resolution 3-D technique is glasses-free, but does not reduce brightness or restrict viewer orientation as with conventional auto-stereoscopic techniques, according to its inventors.
Instead of using the unedited left and right images from a twin-lens camera, MIT's dual-stacked LCD display uses content-adaptive parallax barriers, as displayed here.
Further Reading: http://bit.ly/NextGenLog-mP8p
Tuesday, May 10, 2011
#MATERIALS: "Paper Smartphones Use Bending Gestures"
A user demonstrates how "bending gestures" work, here turning up the corner of the flexible PaperPhone. (Source: Queen's University Human Media Lab)
If that smartphone or touch-screen tablet is starting to feel heavy after holding it in mid-air for a few minutes, then help is on the way. New "paper" smartphones and tablets will use flexible plastic displays and electronics light enough to wear.
A flexible Snaplet wrist-worn touch-screen tablet demo (Source: Queen's University Human Media Lab)
Most of the weight of mobile devices comes from the metal inside, where copper conductors shuttle around the electrons that make your smartphone or touch-screen tablet work. However, by switching from stiff metal to flexible plastic conductors, the mobile devices of the future will be feather-light, thinner, less expensive and will consume a fraction of the power of today's devices.
Within the decade, according to researchers at the Queen's University Human Media Lab (Kingston, Ontario), mobile devices will resemble an "interactive sheet of paper." And to back up the boast, Human Media Lab recently showed off what is billed as the "world's first interactive paper computer."
The PaperPhone uses a thin-film display from E-Ink, but instead of marrying it to a glass substrate as is done by virtually all e-book makers, the Human Media Lab merely encapsulated it in a clear polymer. Because the entire assembly is flexible, it could be used flat or bent into a bracelet.
The PaperPhone also used its flexibility to control functions, for instance, scrolling to the next page when you flick its upper right corner. The researchers identified multiple "bend gestures," such as flexing one side forward or backward, that will be used to control actions on the screen. The current prototype uses six integrated bend sensors on the backside of the flexible E-Ink display, which is being used to explore and test new bending gestures. An FPC (flexible printed circuit), using DuPont's Pyralux flexible circuit material on which solid conductive ink was printed, was attached to the back side of the display to read the bend sensors and trigger the related on-screen functions.
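A hedged sketch of how six bend-sensor readings might map to gestures appears below; the sensor layout, threshold and gesture names are invented for illustration, since the article only says the sensors trigger on-screen actions:

```python
# Hypothetical mapping from six bend-sensor readings to gestures.
# Positive values = bend toward the user, negative = away (assumed).
SENSORS = ("top_left", "top_right", "bottom_left",
           "bottom_right", "left_edge", "right_edge")

def classify_bend(readings, threshold=0.5):
    """Return a gesture name from raw sensor values, or None."""
    bent = {name: value for name, value in zip(SENSORS, readings)
            if abs(value) > threshold}
    if bent.keys() == {"top_right"}:
        return "page_forward" if bent["top_right"] > 0 else "page_back"
    if {"left_edge", "right_edge"} <= bent.keys():
        return "zoom"  # whole-screen flex
    return None

print(classify_bend([0.0, 0.8, 0.0, 0.0, 0.0, 0.0]))  # -> "page_forward"
```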
The Human Media Lab also predicts that flexible PaperPhones will lead to paper touch-screen tablets and may even eliminate the need for printers, since text and images can merely be loaded onto the flexible displays that everybody is already carrying around in their pockets.
Further Reading: http://bit.ly/NextGenLog-jdqn
Monday, May 09, 2011
#MATERIALS: "Graphene modulator tackles optics"
The world's smallest graphene modulator was unveiled recently by researchers at the National Science Foundation (NSF) Nanoscale Science and Engineering Center at the University of California-Berkeley. The research team, led by professor Xiang Zhang, claimed its breakthrough will someday allow smartphones to download entire movies in a matter of seconds.
The world's smallest graphene modulator uses electrical signals to switch a laser on and off for faster, smaller, cheaper optical communications. Source: UC Berkeley
Today, optical modulators are used to speed communications by using electrical signals to switch a laser on and off for long-haul communications between systems. However, high-speed optical communications is migrating to short-haul communications and someday may even be used by mobile devices to quicken the transfer of large files.
Further Reading: http://bit.ly/NextGenLog-jl3j
#CHIPS: "MEMS touted for telecomm, embedded apps"
SiTime's chips wire-bond a mechanical MEMS device to an application-specific integrated circuit (ASIC); the two are mounted together inside a standard package.
The increasingly stringent requirements of high-speed telecommunications, wireless networking and embedded applications can now be satisfied with micro-electro-mechanical systems, according to SiTime Inc., which announced Monday (May 9) its new SiT380X family of MEMS voltage-controlled oscillators (VCOs), offering up to 10 times better linearity and a wider fine-tuning range than quartz crystals.
Further Reading: http://bit.ly/NextGenLog-iq2S
#MARKETS: "Smarter Enterprises Measure Brand Appeal"
Apple's "I'm a Mac" versus "I'm a PC" campaign was designed to enhance its brand's appeal
Recall Apple's "I'm a Mac" versus "I'm a PC" campaign? Apple, like every other enterprise, wants to establish its "brand" as desirable to consumers, and now there's a better way to measure it, according to university researchers.
Focus-group raves and rip-roaring sales have been the best ways to measure a brand's appeal, but now researchers at North Carolina State University claim to have a better method to calibrate branding.
Each enterprise seeks to establish a brand "personality" that reflects its goals and aspirations, as defined by its board of directors and communicated by its chief executive officer. Many methods have been developed to get the point across to customers, and a few tools, such as Stanford University professor Jennifer Aaker's Brand Personality Scale, can measure the perceived quality of brands—as rugged, sophisticated, competent, exciting or sincere. However, there has been no objective way to quantify a brand's appeal until now.
North Carolina State University researchers claim to have remedied that problem with a simple, easy-to-use method of measuring the brand personality appeal (BPA). According to David Henard, an associate professor of business management, measuring BPA consists of getting accurate answers to just a few basic questions.
"The only existing scale was Aaker’s Brand Personality Scale," said Henard, who performed the work with Traci Freling at the University of Texas (Arlington) and Jody Crosno at West Virginia University. "What we’ve done is develop a system that digs deeper to help companies link brand personality to concrete outcomes."
The researchers claim that with proper grooming, a brand personality will lead people to favor one product over others, depending on how high its BPA rises. The best part is that BPA depends on just three dimensions:
● Favorability—how positively a brand is viewed
● Originality—how distinct a brand's personality is from competitors
● Clarity—how clearly the brand personality is perceived by consumers.
Whether it's IBM or Lady Gaga, these researchers claim that brand personality appeal all depends on the mix of favorability, originality and clarity. Using these three variables, the research team has established an objective measurement system for BPA that uses just 16 questions to accurately assess the appeal of an enterprise's brand.
What is even better is that by measuring each dimension separately, it is possible to assess what needs to be done to improve an enterprise's brand appeal. For instance, if an enterprise measures high in originality and clarity, but low on favorability, the enterprise should know to focus its marketing efforts on improving favorability, temporarily shelving campaigns focused on originality and clarity.
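In code, that diagnostic is just a per-dimension average followed by a search for the weakest score. The three dimensions are the researchers'; the 1-to-7 scale, the sample answers and the function names below are illustrative assumptions:

```python
# Illustrative BPA scoring: average survey items per dimension,
# then point marketing efforts at the weakest dimension.
def bpa_scores(responses):
    """responses: dict of dimension -> list of 1-7 survey answers."""
    return {dim: sum(vals) / len(vals) for dim, vals in responses.items()}

def weakest_dimension(scores):
    return min(scores, key=scores.get)

survey = {
    "favorability": [3, 4, 2, 3, 3],
    "originality":  [6, 6, 5, 6, 6],
    "clarity":      [5, 6, 5, 6, 5],
}
scores = bpa_scores(survey)
print(scores)                                  # favorability lags at 3.0
print("focus on:", weakest_dimension(scores))  # -> "focus on: favorability"
```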
However, the researchers caution that BPA does not measure success—indeed many successful companies with low brand appeal, such as military contractors, succeed by focusing on originality. Nevertheless, for most enterprises there are many benefits to a high brand personality appeal, according to the researchers, not least of which is higher consumer trust and brand loyalty.
Further Reading: http://bit.ly/NextGenLog-jvaG
Friday, May 06, 2011
#MATERIALS: "Seeing Is Believing When Cloaks Disappear"
The world's first visible-light cloak recently converted nonbelievers by making a tiny object disappear—Harry Potter style. Transformational optics will enable a future where what you see is not always what is really there, according to the Karlsruhe Institute of Technology, which recently pulled off the first disappearance act to cloak a normal visible object.
The internal structure of the carpet cloak arrays pillars at photonic-crystal spacings that match the wavelength of the light to be cloaked, thereby bending it around hidden objects. Credit: KIT
Invisibility cloaks have been shown for infrared or microwave wavelengths, and Northwestern and Oklahoma State universities showed the world's first terahertz (which lies between infrared and microwave) cloaking at a recent conference. KIT, however, claims to have brought the magic of transformational optics to the normal visible-light spectrum.
By sculpting a "woodpile" of parallel pillars into a layered polymer that acts as a photonic crystal, the resulting "carpet cloak" shields objects illuminated with normal, unpolarized, red light—albeit only in an area half the size of a human hair. By making the spacings of the pillars the same as the light being transmitted through it, the material can bend light in almost any direction. Transformation optics works by continuously adjusting this spacing to guide light around objects, enabling invisibility cloaks.
The cloak works by adjusting the local phase velocity of light with the pillars’ spacing, but the beauty of the process is that it can be accomplished with direct-write lasers. The process works by first laying down a pattern of nanoscale-spaced pillars by etching a single layer of polymer. Then a filler is spread into the etched-out areas, a new layer of polymer is laid down, and the process repeats. After the material has been built up to whatever size is needed for the cloaked area, the filler is dissolved leaving just the woodpile of light-bending pillars. Even though the cloak only worked for red light (700-nanometer wavelength), KIT nevertheless claimed it was the first demonstration for normal unpolarized visible light.
KIT's lead researcher, Joachim Fischer, explained at CLEO that the key to achieving the nanoscale spacings necessary to cloak visible light was adapting diffraction-unlimited microscopy techniques to the laser direct-writing process. Those adjustments allowed the researchers to dramatically increase their etching resolution to the nanometer scale, thus achieving a visible light cloak where others were at longer wavelengths of infrared (micrometer), terahertz (millimeter) or microwaves (centimeter).
Next, KIT is aiming to halve the spacing between its pillars, shrinking them enough that the carpet cloak will work at all visible wavelengths—not just red—finally realizing the Harry Potter dream. The researchers are also looking for ways to increase the area concealed.
Besides invisibility cloaks, the researchers believe their technique will enable flat, aberration-free lenses that can be integrated onto future optical microchips. Fischer's team is also working on what KIT calls optical "black holes" that could improve the efficiency of solar cells by letting them harvest a broader swath of wavelengths than today's solar cells, which must be tuned to a region's typical spectrum.
Further Reading: http://bit.ly/NextGenLog-mgyc
#MATERIALS: "Lab aims for superconducting FET"
Brookhaven physicist Ivan Bozovic wants to understand why a thin-film insulator transitions to the superconducting state.
The resistance of superconductivity to rational explanation has prompted Brookhaven National Laboratory to fabricate atomically perfect ultra-thin films capable of accurately characterizing the transition from insulator to superconductor. A normally insulating copper-oxide material (cuprate) was configured like the channel of a field-effect transistor (FET), using molecular beam epitaxy to create an atomically perfect superconducting film.
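As a loose illustration of what characterizing such a transition involves, the sketch below generates a hypothetical resistance-versus-temperature curve and locates the transition at its steepest point. The data and the logistic curve shape are invented for the example; these are not Brookhaven's measurements or methods.

```python
# Minimal sketch (hypothetical data, not Brookhaven's): locating a
# superconducting transition temperature as the steepest point of a
# resistance-versus-temperature curve.
import numpy as np

T = np.linspace(4.0, 60.0, 500)                    # temperature sweep, kelvin
Tc_true, width = 30.0, 1.5                         # assumed transition parameters
R = 50.0 / (1.0 + np.exp(-(T - Tc_true) / width))  # R(T) drops to ~0 below Tc

dRdT = np.gradient(R, T)                           # numerical derivative
Tc_est = T[np.argmax(dRdT)]                        # steepest slope marks the transition
print(f"Estimated transition temperature: {Tc_est:.1f} K")
```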
Further Reading: http://bit.ly/NextGenLog-jEVM
#TOUCH: "Touch-Screen Musicians Jam in Virtual Band"
Drummers have always made music by tapping their fingers against tabletops. Now musicians who play bass, drums and keyboard are getting into the act with specialized apps for large, flat-panel touch screens. Touch screens as large as 46 inches are enabling musicians to tap out melody, bass and rhythm on virtual instruments, then record the audio, all without touching a mouse.
Apps are turning touchscreens into musical instruments. (Source: NextWindow)
Once hailed as a revolutionary advance in man-machine interfaces, the mouse is starting to appear long in the tooth compared with giant touch screens that enable an entire user interface to be controlled by touch. Musicians especially can benefit from touch screens that offer dedicated keys, buttons, switches, rotary dials and other controller surfaces that operate like conventional music hardware but can be instantly configured or swapped out.
Touch screens today are mostly confined to tiny smartphones or marginally larger tablets, but the availability of large, flat-panel touch-screen displays has induced the music software industry to adapt its virtual instruments and audio recording applications to touch screens.
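The appeal is that an on-screen control surface is just data, so a layout can be rebuilt on the fly. The toy Python/tkinter sketch below illustrates the idea with a clickable "drum pad" whose kits can be swapped instantly; the pad names and layouts are invented for the example and have nothing to do with the commercial apps named below.

```python
# Toy sketch of a reconfigurable on-screen control surface: each "kit"
# is just a list of pad names, so swapping layouts rebuilds the UI
# instantly. (Illustrative only; pad and kit names are made up.)
import tkinter as tk

LAYOUTS = {
    "rock":  ["Kick", "Snare", "Hi-hat", "Crash"],
    "latin": ["Conga", "Bongo", "Timbale", "Cowbell"],
}

def build_pads(frame, layout):
    # Tear down the old pads and lay out the new kit from data.
    for widget in frame.winfo_children():
        widget.destroy()
    for name in LAYOUTS[layout]:
        # Each pad is a touch/click target; a real app would trigger a sample.
        tk.Button(frame, text=name, width=12, height=4,
                  command=lambda n=name: print(f"hit: {n}")).pack(
            side=tk.LEFT, padx=4, pady=4)

root = tk.Tk()
root.title("Virtual drum pad")
pads = tk.Frame(root)
pads.pack()
for layout in LAYOUTS:
    tk.Button(root, text=f"Load {layout} kit",
              command=lambda l=layout: build_pads(pads, l)).pack(side=tk.LEFT)
build_pads(pads, "rock")
root.mainloop()
```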
To prove the point, NextWindow recently put together a virtual-band stage consisting of 13 large, flat-panel touch screens (seven 46-inch touch screens and six HP All-in-One desktop touch computers) featuring every instrument usually found in a band, including bass, keyboards and drums, as well as the associated electronic mixing boards necessary to record tracks. The company then invited Megan Slankard and her band on stage to record one of her latest songs using touch-screen technology they had never used before. After just a few hours of practice with the virtual instruments, the band recorded a virtual version of "Sails" from its album "A Little Extra Sun."
"It was really cool to see how [touch-screen technology] works and how you could use these instruments in a live setting," said Slankard. "On the touch screens in the video we play drums, keyboards and bass."
The drummer used screens dedicated to three different drum sets, including EZdrummer's "Drumkit From Hell," Latin Percussion's EZX and Fingertapp's Drums. With the virtual instruments spread out across the touch screen—looking like a window onto a photorealistic drum kit—the drummer used his sticks and fingers to tap out rhythms.
The keyboard player went virtual by tickling virtual ivories presented by two apps, Cakewalk Studio Instruments' Electric Piano and Fingertapp's Piano. Although the touch screens did not provide the tactile feedback of a real keyboard, they did offer the advantage of instant configurability and novel soundscapes.
Even the band's bassist laid down licks using a virtual bass guitar that duplicated a real instrument's fretboard.
Finally, the audio engineer made the recording with a virtual mixing board from FL Studio, which includes touch-screen controls for both recording and sequencing sound.
The larger touch screens were made by attaching NextWindow's 2700 Touch Overlay to a 46-inch LCD panel, while the 23-inch HP TouchSmart All-in-One PCs used NextWindow's 1900 Series Desktop Touch platform.
Further Reading: http://bit.ly/NextGenLog-k75C
Thursday, May 05, 2011
#SPACE: "James Webb Space Telescope Smarter Than Hubble"
Two images of the Carina Nebula from the Hubble Space Telescope compare visible-light photography (top) with the higher-resolution infrared photography (bottom) of the kind the James Webb Space Telescope will use.
The stunning images taken by the Hubble Space Telescope will look like sketches compared with the improved images to be delivered by the James Webb Space Telescope, the world's next-generation space observatory. The James Webb Space Telescope will outperform Hubble's single mirror by focusing the light from 18 separate mirrors onto a single sensor, enabling ultra-high-resolution observation of the most distant objects in the universe.
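A little arithmetic suggests the scale of the improvement in light-gathering power, using the commonly cited aperture sizes (2.4 meters for Hubble, 6.5 meters effective for Webb's 18 segments). Note that the filled-circle approximation below slightly overstates Webb's true collecting area of roughly 25 square meters.

```python
# Back-of-the-envelope comparison (illustrative; commonly cited figures):
# light-collecting area of Webb's segmented mirror versus Hubble's
# single mirror, approximating each aperture as a filled circle.
import math

hubble_diameter_m = 2.4   # Hubble's primary mirror
webb_diameter_m = 6.5     # effective aperture of Webb's 18 segments

hubble_area = math.pi * (hubble_diameter_m / 2) ** 2
webb_area = math.pi * (webb_diameter_m / 2) ** 2

print(f"Hubble: ~{hubble_area:.1f} m^2")
print(f"Webb:   ~{webb_area:.1f} m^2 "
      f"({webb_area / hubble_area:.1f}x the collecting area)")
```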
Further Reading: http://bit.ly/NextGenLog-ktXH
Wednesday, May 04, 2011
#CHIPS: "New IBM Fellows push computing frontiers"
IBM anointed eight innovators as its newest Fellows on Wednesday, May 4. The honorees included the principal investigator behind Watson, the supercomputer that recently beat human champions at Jeopardy.
David Ferrucci was named an IBM Fellow for his pioneering work in machine question answering, which resulted in the Watson supercomputer beating human champions at Jeopardy.
IBM bestows the honor of Fellow on its most prolific innovators in a practice started by Thomas J. Watson himself in 1962 as a way to encourage creativity. Of the 231 individuals who have been named Fellows since the program's inception, 71 are active IBM employees. Past honorees include pioneers in such technologies as reduced instruction set computing (RISC), thin-film recording heads, DRAM, relational databases, the TrackPoint, virtual memory, the scanning tunneling microscope (STM), Fortran and the AT bus on the original IBM personal computer. Fellows are typically given greater responsibilities in their area of expertise and are granted virtual carte blanche for choosing specific projects.
Further Reading: http://bit.ly/NextGenLog-kK8a
Tuesday, May 03, 2011
#ALGORITHMS: "Smart Appliances Hook to Smart Grid"
Typical Japanese housewives demonstrate how taking one beer from the fridge (right) automatically decrements the beer-on-hand counter (left) displayed on the touchscreen.
Appliance manufacturer LG has announced a yearlong rollout for its new line of smart appliances, each with touch-screen apps and remote access from smartphones and touch-screen tablets. LG is pioneering smart-grid-ready appliances, with new models slated to roll out in nearly every month of this year. First in the line will be a smart refrigerator whose integrated touch screen and remote-access capability reveal what food items are inside, their expiration dates and suggested recipes using only on-hand items.
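The bookkeeping behind the on-hand counter in the caption above is simple enough to sketch. The Python below is a toy model in which removing an item decrements its count, as a touch-screen display would then show; the class and item names are invented for the example (this is not LG's software).

```python
# Toy sketch of a smart fridge's on-hand counter (hypothetical class
# and item names): removing an item decrements its count, which a
# touch-screen UI would display.
class SmartFridgeInventory:
    def __init__(self):
        self.counts = {}

    def stock(self, item, quantity):
        """Add newly loaded items to the running inventory."""
        self.counts[item] = self.counts.get(item, 0) + quantity

    def remove(self, item):
        """Called when a sensor sees an item leave the fridge."""
        if self.counts.get(item, 0) > 0:
            self.counts[item] -= 1
        return self.counts.get(item, 0)

fridge = SmartFridgeInventory()
fridge.stock("beer", 6)
print("beer on hand:", fridge.remove("beer"))  # -> 5
```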
Further Reading: http://bit.ly/NextGenLog-l0Fr
#ROBOTICS: "Freescale mechatronics contest taps Tower-powered FreeBots"
Freescale Semiconductor Inc. announced its Make It Challenge Tuesday, May 3, at the Embedded Systems Conference Silicon Valley. Engineers enter the contest by enrolling in a hands-on workshop, where they will receive a free biped robot studded with Freescale sensors with which to create a unique mechatronics application. Up to 100 robot builders, plus 100 more in a parallel systems track, will share a $12,000 purse.
Further Reading: http://bit.ly/NextGenLog-lxlj