Spansion's CEO has a different perspective on next-gen user interfaces, maintaining that smart memory is needed to ensure fast response times when fast broadband access to cloud resources is absent.
R. Colin Johnson
User interfaces have evolved from keyboard and mouse to user-aware voice control, but need smarter memory to make the jump to augmented-reality displays that don't depend on cloud connectivity.
Here is what EETimes says about next-gen user interfaces: Next-generation augmented reality displays will employ smart-recognition of their users that melds context-sensitive voice-and-gesture commands with total-surrounding awareness of other people, places and things, according to John Kispert, CEO of Spansion Inc., who gave the keynote address at the Globalpress Electronics Summit 2012 last week. But smart memory will be needed to wean these advanced user-interfaces off cloud connectivity dependence, Kispert said.
Further Reading
Monday, April 30, 2012
#MARKETS: "Technology Barometer Measures Consumer Interest"
A new technology "barometer" with cross-tabulation capabilities allows marketing organizations to create targeted reports on consumer demand. The new cross-tab feature enables customization for specific enterprise usage models, according to Allied Business Intelligence (ABI Research).
R. Colin Johnson
Here is what ABI says about its technology barometer service: ABI Research’s Technology Barometer™ Research Services are based on bi-annual primary research tracking studies and they provide dynamic views and insight into the constantly changing consumer technology market. Every six months, consumers in seven different countries complete comprehensive online questionnaires that gather information on their usage of and purchase intentions for a wide variety of existing consumer technology products and services, as well as receptivity to emerging products and services. The Technology Barometer’s latest news includes "25% of Consumers Intend to Purchase Smartphones and HDTVs in the First Half of 2012" as well as "Samsung Tops for HDTV Purchase Intent Among US Consumers."
ABI Research has also developed an easy-to-use cross-tab tool to view and compare the underlying data within the Technology Barometer™ survey results that addresses the unique needs of each customer, increasing the return on investment for market intelligence.
The online and interactive cross-tabulation generator gives clients the opportunity to customize their own reports from questions in a single survey. “The online crosstab tool is a great way for clients to take control of survey results and perform comparisons that are most relevant to their business,” says Jeff Orr, group director of consumer research.
Now, clients of the services that wish to examine the underlying data can do so themselves via a simple, web-based tool. The self-service capability includes the ability to generate tables comparing the results of any single question to the results of any other question, enabling them to dig deeper into the data and explore relationships between groups. Data can be viewed by frequencies or percentages and the resulting tables can be exported as Excel files.
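To give a concrete feel for what such a cross-tabulation produces, here is a minimal Python sketch using pandas; the survey columns and responses are invented for illustration and do not reflect ABI's actual questionnaire or schema.

```python
# Minimal cross-tab sketch with made-up survey data (not ABI's schema).
import pandas as pd

# Each row is one respondent's answers to two survey questions.
responses = pd.DataFrame({
    "country": ["US", "US", "UK", "UK", "DE", "DE", "US", "DE"],
    "intends_to_buy_hdtv": ["yes", "no", "yes", "no", "yes", "yes", "yes", "no"],
})

# Compare the results of one question against another, as counts...
counts = pd.crosstab(responses["country"], responses["intends_to_buy_hdtv"])

# ...or as row percentages, mirroring the frequency/percentage views described above.
percentages = pd.crosstab(responses["country"],
                          responses["intends_to_buy_hdtv"],
                          normalize="index") * 100

# The resulting table can be exported to Excel, as the service allows.
counts.to_excel("crosstab_example.xlsx")
print(counts)
print(percentages.round(1))
```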
The new crosstab generator tool is part of the firm’s three Technology Barometer™ Research Services: Technology Barometer: Digital Lifestyle, Technology Barometer: Connected Home and Computing, and Technology Barometer: Mobile.
Further Reading
Friday, April 27, 2012
#ALGORITHMS: "Hackathon to Craft Business Apps"
A hackathon will challenge programmers to create business apps by pairing hackers with veteran software engineers starting Monday, April 30, during the RallyON 2012 Conference. Rally Software, which helps enterprises build better software, will match hackers with customers like HP, Intel and SAS to create apps for the Rally platform. Each of the new apps will be made available through Rally's partnerships with GitHub and StackExchange. Only a few slots are left for hackers, who must be Rally Software customers.
R. Colin Johnson
Here is what Rally Software says about the hackathon: At RallyON 2012, we will feature a customer Hack-a-Thon where you will pair with a Rally Software engineer and build a custom Rally App. Be among the first to receive technical training on our new platform, preview our product enhancements, and develop apps using our new AppSDK 2.0 and Lookback API. At the end of the two and a half days in Boulder, you will demo your new app to conference attendees and be eligible to win prizes awarded to the top team.
The Hack-a-Thon caters directly to engineers and developers who want to roll up their sleeves and dive in with Rally engineers. To maintain integrity and consistency in the process, we are limiting this portion of the conference to 15 registrants. We only have 5 spots left! Those considering attending should be proficient in JavaScript and HTML and have a desire to customize Rally to meet an Agile project management or process customization need. Please consider registering soon to reserve your spot!
Further Reading
#ENERGY: "Nanocrystalline Printable Solar Cells Maturing"
The promise of ultra-cheap solar cells that can be printed like ink on almost any surface got closer to realization recently, according to researchers at the University of Southern California. USC claims its scientists have shown that nanocrystals of cadmium selenide suspended in a liquid vehicle can be printed onto polymers using a new kind of highly conductive ligand that should now make the technique commercially feasible.
R. Colin Johnson
A USC scientist treats a glass slide with nanocrystals. (Photo/Dietmar Quistorf)
Here is what USC says about its researchers' breakthrough: Scientists at USC have developed a potential pathway to cheap, stable solar cells made from nanocrystals so small they can exist as a liquid ink and be painted or printed onto clear surfaces.
Richard Brutchey, assistant professor of chemistry at the USC Dornsife College of Letters, Arts and Sciences, and USC postdoctoral researcher David Webber developed a new surface coating for the nanocrystals, which are made of the semiconductor cadmium selenide.
Liquid nanocrystal solar cells are cheaper to fabricate than available single-crystal silicon wafer solar cells but are not nearly as efficient at converting sunlight to electricity. Brutchey and Webber solved one of the key problems of liquid solar cells: how to create a stable liquid that also conducts electricity.
In the past, organic ligand molecules were attached to the nanocrystals to keep them stable and to prevent them from sticking together. These molecules also insulated the crystals, making the whole thing terrible at conducting electricity.
Brutchey and Webber discovered a synthetic ligand that not only works well at stabilizing nanocrystals but actually builds tiny bridges connecting the nanocrystals to help transmit current.
With a relatively low-temperature process, the researchers’ method also allows for the possibility that solar cells can be printed onto plastic instead of glass without any issues with melting, resulting in a flexible solar panel that can be shaped to fit anywhere.
As they continue their research, Brutchey said he plans to work on nanocrystals built from materials other than cadmium, which is restricted in commercial applications due to toxicity.
The National Science Foundation and USC Dornsife funded the research.
Further Reading
Thursday, April 26, 2012
#WIRELESS: "Geocoder Simplifies Location-Based Business"
TomTom recently announced its Global Geocoder at the Geospatial World Forum. The Global Geocoder is a batch-oriented geocoding web service that fuses geographic knowledge with business information for location-based analytics. R. Colin Johnson
Here is what TomTom says about its Global Geocoder: Geocoding is the process of converting addresses into geographic coordinates to allow location analysis. By combining geographic knowledge with business information, businesses can make smarter decisions that will lead to better products, as well as cost savings and process improvements. For example, insurance companies are relying on geocoding techniques to help set premiums and make underwriting decisions based on the physical locations of the insurance projects.
The TomTom Global Geocoder offers the following benefits:
· High volume results in one easy step, with no usage restrictions
· International coverage enables one stop for all geocoding needs
· Highly accurate, address point level matching
· Fast results delivering hundreds of thousands of records per hour.
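For readers curious how a batch geocoding web service is typically driven from code, here is a short Python sketch; the endpoint URL, key parameter and response fields are hypothetical placeholders, not TomTom's documented Global Geocoder interface.

```python
# Hypothetical batch-geocoding client; the URL, key parameter and response
# fields are placeholders, not TomTom's documented API.
import json
import urllib.request

ADDRESSES = [
    "1600 Amphitheatre Parkway, Mountain View, CA",
    "De Ruijterkade 154, Amsterdam, NL",
]

def geocode_batch(addresses, endpoint="https://geocoder.example.com/batch",
                  api_key="YOUR_KEY"):
    """Send a batch of addresses and return (address, lat, lon) tuples."""
    payload = json.dumps({"key": api_key, "addresses": addresses}).encode("utf-8")
    request = urllib.request.Request(
        endpoint, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        results = json.load(response)
    # Assume the service echoes each address back with a coordinate pair.
    return [(r["address"], r["lat"], r["lon"]) for r in results["results"]]

if __name__ == "__main__":
    for address, lat, lon in geocode_batch(ADDRESSES):
        print(f"{address}: {lat:.5f}, {lon:.5f}")
```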
Further Reading
Wednesday, April 25, 2012
#CHIPS: "Intersil's chip set enables Micron Tech LCOS display"
Pico-projectors are not just for kids anymore, with myriad professional applications springing forth from these pint-sized augmented-reality enablers. R. Colin Johnson
Intersil's pico-projector reference design allows engineers to quickly prototype novel applications such as keyboards projected onto a tabletop.
Intersil Corp. is offering specialized mixed-signal chip sets in a reference design using Micron Technology Inc.'s liquid crystal on silicon (LCoS) display for novel pico-projector applications.
Pico-projectors are being designed into myriad applications, from augmented reality to reveal muscles/veins under the skin for surgeons or the wiring/pipes inside walls for construction workers, to projecting virtual keyboards or control panels on to desktops for knowledge workers.
However, the application driving mass production—and consequent lower prices—will be social networking apps, according to Intersil, which aims to prompt consumer electronics makers to start building tiny 10-millimeter pico-projector modules into their smartphones, starting with an integrated mixed-signal chip set.
Further Reading
#QUANTUM: "NIST Simulator Qualifies New Miracle Materials"
The National Institute of Standards and Technology (NIST) has created a simulator that should facilitate the development of new materials to be used in future quantum computers. R. Colin Johnson
The NIST quantum simulator permits study of quantum systems that are difficult to study in the laboratory and impossible to model with a supercomputer. In this photograph of the crystal, the ions are fluorescing, indicating the qubits are all in the same state. Under the right experimental conditions, the ion crystal spontaneously forms this nearly perfect triangular lattice structure. Credit: Britton/NIST
Here is what NIST says about its quantum simulator: Physicists at the National Institute of Standards and Technology (NIST) have built a quantum simulator that can engineer interactions among hundreds of quantum bits (qubits)—10 times more than previous devices. The simulator has passed a series of important benchmarking tests, and scientists are poised to study problems in materials science that are impossible to model on conventional computers.
The heart of the simulator is a two-dimensional crystal of beryllium ions (blue spheres); the outermost electron of each ion is a quantum bit (qubit, red arrows). The ions are confined by a large magnetic field in a device called a Penning trap (not shown). Inside the trap the crystal rotates clockwise. Credit: Britton/NIST
Many important problems in physics—especially low-temperature physics—remain poorly understood because the underlying quantum mechanics is vastly complex. Conventional computers—even supercomputers—are inadequate for simulating quantum systems with as few as 30 particles. Better computational tools are needed to understand and rationally design materials, such as high-temperature superconductors, whose properties are believed to depend on the collective quantum behavior of hundreds of particles.
The NIST simulator consists of a tiny, single-plane crystal of hundreds of beryllium ions, less than 1 millimeter in diameter, hovering inside a device called a Penning trap. The outermost electron of each ion acts as a tiny quantum magnet and is used as a qubit—the quantum equivalent of a “1” or a “0” in a conventional computer. In the benchmarking experiment, physicists used laser beams to cool the ions to near absolute zero. Carefully timed microwave and laser pulses then caused the qubits to interact, mimicking the quantum behavior of materials otherwise very difficult to study in the laboratory. Although the two systems may outwardly appear dissimilar, their behavior is engineered to be mathematically identical. In this way, simulators allow researchers to vary parameters that couldn’t be changed in natural solids, such as atomic lattice spacing and geometry. In the NIST benchmarking experiments, the strength of the interactions was intentionally weak so that the simulation remained simple enough to be confirmed by a classical computer. Ongoing research uses much stronger interactions.
Simulators exploit a property of quantum mechanics called superposition, wherein a quantum particle is made to be in two distinct states at the same time, for example, aligned and anti-aligned with an external magnetic field. So the number of states simultaneously available to 3 qubits, for example, is 8, and this number grows exponentially with the number of qubits: 2^N states for N qubits. Crucially, the NIST simulator also can engineer a second quantum property called entanglement between the qubits, so that even physically well separated particles may be made tightly interconnected. Recent years have seen tremendous interest in quantum simulation; scientists worldwide are striving to build small-scale demonstrations.
However, these experiments have yet to fully involve more than 30 quantum particles, the threshold at which calculations become impossible on conventional computers. In contrast, the NIST simulator has extensive control over hundreds of qubits. This order-of-magnitude increase in qubit count increases the simulator’s quantum state space exponentially. Just writing down on paper a state of a 350-qubit quantum simulator is impossible—it would require more than a googol of digits (a googol is 10 to the power of 100).
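The scaling quoted above is easy to check with a few lines of arithmetic: the sketch below counts the 2^N basis states available to N qubits and flags where that count passes a googol, which is roughly where the 350-qubit figure comes from.

```python
# Count the basis states available to N qubits (2**N) and note where the
# count exceeds a googol (10**100), illustrating why a ~350-qubit state
# cannot be written down classically.
GOOGOL = 10 ** 100

for n_qubits in (3, 10, 30, 100, 350):
    n_states = 2 ** n_qubits
    note = "  (more than a googol)" if n_states > GOOGOL else ""
    print(f"{n_qubits:>4} qubits -> 2^{n_qubits} = {n_states:.3e} basis states{note}")
```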
Over the past decade, the same NIST research group has conducted record-setting experiments in quantum computing, atomic clocks and, now, quantum simulation. In contrast with quantum computers, which are universal devices that someday may solve a wide variety of computational problems, simulators are “special purpose” devices designed to provide insight about specific problems. This work was supported in part by the Defense Advanced Research Projects Agency. Co-authors from Georgetown University, North Carolina State University, and institutions in South Africa and Australia contributed to the research. As a nonregulatory agency of the U.S. Department of Commerce, NIST promotes U.S. innovation and industrial competitiveness by advancing measurement science, standards and technology in ways that enhance economic security and improve our quality of life.
Further Reading
#CHIPS: "VoIP Processor Offers Unclonable Security"
The popularity of voice-over-Internet-protocol (VoIP) has prompted Dialog Semiconductor to license the safe boot and unclonability features of Intrinsic-ID. R. Colin Johnson
Here is what Intrinsic-ID says about the new VoIP chip: The Dialog Semiconductor SC14453S is the world’s first commercially available Voice over IP (VoIP) processor circuit that integrates Intrinsic-ID’s patented Hardware Intrinsic Security IP – also referred to as Physical Unclonable Function (PUF).
In Hardware Intrinsic Security (HIS) technology a secret key is extracted like a silicon biometric or fingerprint from silicon hardware directly and only when required.
By using this HIS-based fingerprint a firm binding of software and hardware is possible, offering superior levels of anti-tampering and anti-cloning characteristics.
This approach can ensure that only authenticated software can run on the SC14453S platform. A message authentication tag for a bootloader or software image of a particular customer is securely stored with the HIS IP of Intrinsic-ID, without the need for embedded non-volatile memory.
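To make the hardware-software binding concrete, here is a conceptual Python sketch, not Intrinsic-ID's actual HIS implementation: a device-unique key is derived from a (simulated) PUF response, and that key verifies a message authentication tag over the boot image before it is allowed to run.

```python
# Conceptual PUF-based secure-boot sketch. The PUF response is simulated and
# the key derivation is simplified; this is not Intrinsic-ID's HIS product.
import hashlib
import hmac
import os

def read_puf_response() -> bytes:
    """Stand-in for reading the silicon 'fingerprint'; real hardware would use
    SRAM start-up values or similar device-unique physical variation."""
    return os.urandom(32)  # simulation only

def derive_key(puf_response: bytes, helper_data: bytes) -> bytes:
    """Derive a device-unique key. Real systems use fuzzy extractors and error
    correction to handle PUF noise; a hash stands in for that here."""
    return hashlib.sha256(puf_response + helper_data).digest()

def tag_image(key: bytes, image: bytes) -> bytes:
    """Message authentication tag stored alongside the boot image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def secure_boot(key: bytes, image: bytes, stored_tag: bytes) -> bool:
    """Boot only if the image authenticates against the device-unique key."""
    return hmac.compare_digest(tag_image(key, image), stored_tag)

# Provisioning: derive the key on the genuine device and tag the firmware.
puf = read_puf_response()
key = derive_key(puf, b"public-helper-data")
firmware = b"\x7fELF...authentic bootloader image"
tag = tag_image(key, firmware)

# Boot time: the same silicon reproduces the key; a clone or tampered image fails.
print("authentic image boots:", secure_boot(key, firmware, tag))
print("tampered image boots:", secure_boot(key, firmware + b"!", tag))
```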
Further Reading
#ALGORITHMS: "IBM'S Watson Conquering Expert Medical Diagnoses"
The use of artificial intelligence (AI) to quickly sift and sort through millions of medical records, reports, journal articles and other resources is repurposing the gaming-oriented Watson AI for making instant diagnoses of diseases that traditionally require weeks of visits to specialists, with no guarantee of success. R. Colin Johnson
Here is what IBM says about its progress at repurposing Watson for medical diagnoses: Memorial Sloan-Kettering Cancer Center (MSKCC) and IBM have agreed to collaborate on the development of powerful solutions built upon IBM Watson in order to provide medical professionals with improved access to current and comprehensive cancer data and practices.
MSKCC oncologists will assist in developing IBM Watson oncology applications that utilize patients’ medical information and synthesize a vast array of continuously updated and vetted treatment guidelines, published research and insights gleaned from the deep experience of MSKCC clinicians. This will provide an individualized recommendation to physicians and provide users with a detailed record of the data and evidence used to reach the recommendations.
Oncologists will have access to relevant cancer care information in order to personalize diagnosis and treatment plans for their patients. Creating an outcome- and evidence-based decision support solution will help doctors create an individualized course of action for patients based on current evidence. Physicians will be able to confirm a hypothesis or seek alternative ideas based on confidence-weighted responses.
In general, healthcare organizations are challenged with unlocking the vast and growing stores of patient information, clinical data, research, and market insights to make more informed decisions and improve patient outcomes. Medical data is doubling every five years and it is estimated that 80 percent of medical information resides in an unstructured data format like nurses notes, clinical reports and doctor dictated comments. One challenge is to provide a complete view of all the available data so that trends and patterns can be determined and acted upon.
Ready for Watson is a program designed to prepare and accelerate client development toward a Watson Solutions deployment. The program identifies steps to be taken along a progression path and the associated enabling technologies to consider. One such complementary technology is IBM Content and Predictive Analytics for Healthcare, which provides healthcare organizations a means to improve patient outcomes. Natural language processing is paired with predictive root cause analysis, providing a unique forward-looking view of the data.
Seton Health Care Family, for example, had more than two million patient contacts a year with nearly a dozen patient records per contact; some records were electronic, but many were handwritten notes. Clinicians required an aggregated view of the patient to drive informed decision making. Seton Health Care Family was able to mine the unstructured data by using natural language processing and search technologies to produce actionable results regarding re-admission risks for congestive heart failure patients.
IBM Content and Predictive Analytics for Healthcare empowered Seton Health Care Family to turn insight into action for improved patient care and built a critical foundation for the future work planned for a full Watson deployment.
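As a toy illustration of mining unstructured notes for readmission risk, the sketch below scores free-text entries against a small keyword list; the terms, weights and notes are invented, and Seton's actual deployment relied on IBM's natural language processing stack rather than simple matching.

```python
# Toy illustration of flagging readmission-risk signals in free-text notes.
# Keywords, weights and notes are made up; real systems use full NLP pipelines.
import re

RISK_TERMS = {
    "shortness of breath": 2,
    "edema": 2,
    "missed medication": 3,
    "weight gain": 1,
}

def risk_score(note: str) -> int:
    """Sum the weights of every risk term found in a clinician's note."""
    text = note.lower()
    return sum(weight for term, weight in RISK_TERMS.items()
               if re.search(r"\b" + re.escape(term) + r"\b", text))

notes = [
    "Patient reports weight gain and ankle edema since discharge.",
    "Follow-up visit, no complaints, medications reviewed.",
    "Missed medication twice this week; shortness of breath at night.",
]

for note in notes:
    print(f"score={risk_score(note)}  {note}")
```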
Operational and clinical insights can be attained by following the path to Watson.
Further Reading
#CHIPS: "100-Gbit Ethernet Finds Single-Chip Solution"
Broadcom aims to spread 100 gigabit per second Ethernet everywhere with a single-chip solution for line card applications, rather than the half-dozen chips and field-programmable gate arrays (FPGAs) needed to implement 100-Gbit Ethernet today. R. Colin Johnson
Server Ethernet ports running at 100 gigabits per second will grow at a rate of 170 percent over the next four years. Source: Infonetics
Here is what EETimes says about 100Gbit Ethernet: Broadcom Corp. announced its fourth-generation Ethernet network processor, which it claims is the industry's first chip to use massive parallelism by virtue of its 64 packet-processing cores running at one gigahertz. Providing full-duplex 100Gbit per second performance, it can also be configured to provide a dozen 10-Gbit channels.
Further Reading
Tuesday, April 24, 2012
#CHIPS: "3-D FPGAs enable silicon convergence"
Altera has already started adding MPUs, DSPs, ASICs and ASSPs to its FPGAs, making silicon convergence a fait accompli, but Waters' point here is that, with silicon interposers, this capability now has a 3-D platform to take it mainstream. R. Colin Johnson
The three-dimensional (3-D) field-programmable gate array (FPGA) is enabling the era of silicon convergence, according to Altera Corp. (San Jose, Calif.), which is incorporating application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), digital signal processors (DSPs) and microprocessor units (MPUs) onto its FPGAs.
Further Reading
#CHIPS: "Tensilica aims at Chinese audio"
Intellectual property (IP) specialist Tensilica is currently responsible for hundreds of millions of devices serving up high-fidelity audio for voice and music in handsets and other mobile devices. By adding the emerging Chinese audio standard to its entire line of audio IP cores, Tensilica hopes to attain billion-unit shipments--tripling its market penetration in just two years. R. Colin Johnson
Tensilica's audio chip shipments, at 300 million today, are predicted to top 1 billion by 2014, with the total market growing to 10 billion by 2016. Source: Tensilica
Tensilica Inc. (Santa Clara, Calif.) is banking its future growth on the world's largest electronics market by adding support for China's Dynamic Resolution Adaptation (DRA) standard to its entire library of over 100 audio encoders, decoders, and sound enhancement packages for adding high-fidelity audio digital signal processors (DSPs) to mobile handsets.
Tensilica predicts it will top the billion-unit mark in shipments of its high-fidelity DSP audio chips into the mobile phone market by 2014, up from 300 million units today. Tensilica's software partners to address this market include Dolby, Acoustic Technologies, DTS, SRS, QSound, Sensor, Audyssey, AM3D and Arkamys.
Further Reading
#CHIPS: "Freescale ups ante with quad-core Qorivva"
Automotive microcontrollers are just as safety-conscious as the Space Shuttle, since human lives are at risk, which is why Freescale adopted a similar strategy: a completely redundant core whose results are compared with those of the main core at each time step, allowing errors to be corrected before they cause loss of life. R. Colin Johnson
The quad-core Qorivva MPC5746M microcontroller features a redundant checker core (brown, at top) that performs critical calculations one step behind the main core to ensure errors are caught and corrected (click on image to enlarge).
Boosting the performance of its automotive microcontrollers—without increasing power consumption—is the aim of the new quad-core processor announced Tuesday (April 24) by Freescale Semiconductor Inc. at the 2012 Society of Automobile Engineers World Congress in Detroit.
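A rough software analogue of the delayed-lockstep idea described above, offered only as a concept sketch and not Freescale's Qorivva hardware: a checker recomputes each safety-critical step one cycle behind the main path, and a result is released only after the two agree.

```python
# Concept sketch of delayed-lockstep checking (not Freescale's implementation):
# the checker recomputes each step one cycle behind the main path, and a
# mismatch halts the system before the result is acted on.

def control_step(sensor_value: int) -> int:
    """Some safety-critical calculation, e.g. a torque or braking command."""
    return sensor_value * 3 + 7

def run_lockstep(sensor_stream):
    pending = None  # (input, main-core result) awaiting verification next cycle
    for cycle, sensor in enumerate(sensor_stream):
        if pending is not None:
            prev_input, prev_result = pending
            if control_step(prev_input) != prev_result:  # checker disagrees
                raise RuntimeError(f"lockstep mismatch at cycle {cycle - 1}")
            yield prev_result  # only verified results are released
        pending = (sensor, control_step(sensor))  # main core runs one step ahead

for command in run_lockstep([10, 11, 12, 13]):
    print("verified command:", command)
```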
Further Reading
#3D: "Nano Self-Assembly Enables Precision Manufacturing"
In the near future, complete 3D product manufacturing will be performed by smart self-assembling nanoscale building blocks, according to these National Science Foundation (NSF) researchers. R. Colin Johnson
3D micro-printing: two-dimensional sheets of polymers can fold into three-dimensional shapes when water is added. The technique uses a photomask and ultraviolet (UV) light to "print" a pattern. In the absence of UV exposure, the polymer will swell and expand uniformly when exposed to water; however, when polymer molecules within the sheet were exposed to UV light, they became crosslinked--more rigidly linked together at a number of points. Credit: Zina Deretsky, National Science Foundation
Here is what NSF says about their 3D self-assembling manufacturing research: While it is relatively straightforward to build a box on the macroscale, it is much more challenging at smaller micro- and nanometer length scales. At those sizes, three-dimensional (3-D) structures are too small to be assembled by any machine and they must be guided to assemble on their own. And now, interdisciplinary research by engineers at Johns Hopkins University in Baltimore, Md., and mathematicians at Brown University in Providence, R.I., has led to a breakthrough showing that higher order polyhedra can indeed fold up and assemble themselves.
New York University (NYU) researchers have demonstrated an ability to make new materials with empty space on the inside, an advancement that could potentially control desired and unwanted chemical reactions. Mike Ward, of NYU's department of chemistry, and a team of researchers created molecular "flasks," which are essentially self-assembling cage-like containers capable of housing other compounds inside them. These flasks may eventually allow researchers to isolate certain chemical reactions within or outside the flask. The molecular flasks described by Ward and his collaborators take the shape of a truncated octahedron, one of 13 shapes described as an Archimedean solid--discovered by the Greek mathematician Archimedes. Archimedean solids are characterized by a specific number of sides that meet at corners which are all identical. The regularity of these shapes often means they are of particular interest to chemists and materials researchers looking to create complex materials that assemble themselves. Credit: Michael D. Ward, New York University
With support from the National Science Foundation (NSF), David Gracias and Govind Menon, a mathematician at Brown University, are developing self-assembling 3-D micro- and nanostructures that can be used in a number of applications, including medicine.
Menon's team at Brown began designing these tiny 3-D structures by first flattening them out. They worked with a number of shapes, such as 12-sided interconnected panels, which can potentially fold into a dodecahedron shaped container. "Imagine cutting it up and flattening out the faces as you go along," says Menon. "It's a two-dimensional unfolding of the polyhedron."
And not all flat shapes are created equal; some fold better than others. "The best ones are the ones which are most compact. There are 43,380 ways to fold a dodecahedron," notes Menon.
The researchers developed an algorithm to sift through all of the possible choices, narrowing the field to a few compact shapes that easily fold into 3-D structures. Menon's team sent those designs to Gracias and his team at Johns Hopkins who built the shapes, and validated the hypothesis.
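The ranking step can be pictured with a small sketch: given candidate planar unfoldings, each described by the 2-D centers of its panels, score them by compactness and keep the tightest. The radius-of-gyration metric and the toy cube nets below are illustrative assumptions, not the researchers' actual algorithm or data.

```python
# Rank candidate unfoldings ("nets") of a polyhedron by compactness.
# Radius of gyration is used as an illustrative compactness measure; it is
# not necessarily the metric used by the Brown/Johns Hopkins teams.
import math

def radius_of_gyration(panel_centers):
    n = len(panel_centers)
    cx = sum(x for x, _ in panel_centers) / n
    cy = sum(y for _, y in panel_centers) / n
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2
                         for x, y in panel_centers) / n)

# Two toy six-panel nets of a cube: a cross-like net and a straight strip.
candidates = {
    "cross": [(0, 0), (1, 0), (2, 0), (1, 1), (1, -1), (3, 0)],
    "strip": [(i, 0) for i in range(6)],
}

ranked = sorted(candidates, key=lambda name: radius_of_gyration(candidates[name]))
for name in ranked:
    score = radius_of_gyration(candidates[name])
    print(f"{name}: compactness {score:.3f} (lower is more compact)")
```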
Imagine thousands of precisely structured, tiny, biodegradable boxes rushing through the bloodstream en route to a sick organ. Once they arrive at their destination, they can release medicine with pinpoint accuracy. That's the vision for the future. For now, the more immediate concern is getting the design of the structures just right so that they can be manufactured with high yields.
Further Reading
Monday, April 23, 2012
#MEMS: "Movea Unveils Rapid Prototyping Motion Processing"
Movea has enabled smart televisions worldwide by developing algorithms for their smart remote controls, which can be used as pointers, as well as pioneering other motion processing applications using MEMS inertial sensors. A new rapid prototyping tool that assembles its smart motion "atoms" into algorithmic "molecules" of motion--called its Chemistry of Motion initiative--is claimed by Movea to allow it to more quickly develop future motion processing algorithms for its customers. R. Colin Johnson
Movea has defined its motion atoms on a periodic table, which is divided into different types like the periodic table of elements.
Here is what Movea says about how its motion atoms can be assembled into molecules in the Chemistry of Motion: Movea today revealed its Chemistry of Motion enables the company to rapidly prototype new motion features for customers in the mobile, Interactive TV, sports, and health markets. Built on Movea’s patented SmartMotion “atoms”, Chemistry of Motion builds on hundreds of years of R&D which has identified the fundamental elements of human motion and tools for combining SmartMotion atoms into molecules of more complex, end-user features.
Movea’s Chemistry of Motion characterizes and organizes the basic elements of human motion and assembles them into “molecules” which represent the richer, more complex end-user features that the market is increasingly demanding. In Movea’s Table of SmartMotion Elements, basic features are organized into columns according to the type of motion analysis they perform. Each element in the table is characterized by fundamental properties such as category of motion, computational complexity, sensor configuration and sensor placement.
The creation of new features by assembling motion atoms into molecules is accelerated through a powerful internal toolkit the company’s engineers have developed called MoveaLab. The company emphasizes that MoveaLab is only an internal tool used by Movea Engineering; however, Movea customers benefit from reduced development risk and reduced time-to-market for advanced capabilities.
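A toy illustration of the atoms-into-molecules idea, written as plain Python rather than Movea's SmartMotion atoms or its internal MoveaLab toolkit: two simple motion "atoms" operating on accelerometer samples are composed into a higher-level "molecule" feature.

```python
# Toy composition of motion "atoms" into a higher-level "molecule" feature.
# Purely illustrative; it does not use Movea's SmartMotion atoms or MoveaLab.
import math

def magnitude_atom(sample):
    """Atom: magnitude of one 3-axis accelerometer sample (x, y, z)."""
    return math.sqrt(sum(axis ** 2 for axis in sample))

def step_atom(magnitudes, threshold=11.0):
    """Atom: count upward threshold crossings as a crude step detector."""
    return sum(1 for prev, cur in zip(magnitudes, magnitudes[1:])
               if prev < threshold <= cur)

def activity_molecule(samples, sample_rate_hz=50):
    """Molecule: combine the atoms into a 'walking vs. idle' feature."""
    magnitudes = [magnitude_atom(s) for s in samples]
    cadence = step_atom(magnitudes) / (len(samples) / sample_rate_hz)
    return "walking" if cadence > 0.5 else "idle"

# Synthetic 2-second window: gravity plus a periodic bump mimicking steps.
samples = [(0.0, 0.0, 9.8 + (3.0 if i % 25 == 0 else 0.0)) for i in range(100)]
print(activity_molecule(samples))  # -> walking
```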
Further Reading
#ALGORITHMS: "Wafer inspection gets smart"
By adding intelligent parallel processing, KLA-Tencor Corp. (Milpitas, Calif.) has increased the throughput of its wafer inspection and metrology tool by as much as four times.
KLA-Tencor's Concurrent Inspection and Review CLuster (CIRCL) can detect front-side defects (right), then trigger back-side inspection (left) to find associated defects (circled).
Called CIRCL for Concurrent Inspection and Review CLuster, the lithography review and outgoing quality control (OQC) system uses embedded intelligence to monitor the front side and if defects are found, also monitor the back side and edge of a wafer for defects too, while simultaneously measuring the wafer edge profile, edge bead concentricity and macro overlay error. In addition, two lots can be simultaneously monitored in parallel, potentially quadrupling overall throughput.
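The trigger-driven flow just described can be outlined in a few lines of Python; this is only an illustration of the control logic, not KLA-Tencor's software, and the wafer records and defect lists are invented.

```python
# Illustration of a CIRCL-style conditional flow (not KLA-Tencor software):
# back-side/edge review runs only when front-side defects are found, and two
# lots are processed in parallel.
from concurrent.futures import ThreadPoolExecutor

def inspect_front(wafer):
    # Pretend the front-side scan returns a list of defect coordinates.
    return wafer.get("front_defects", [])

def inspect_back_and_edge(wafer):
    return wafer.get("back_defects", []), wafer.get("edge_profile", "nominal")

def process_wafer(wafer):
    front = inspect_front(wafer)
    result = {"id": wafer["id"], "front_defects": front}
    if front:  # trigger the extra passes only when front-side defects appear
        back, edge = inspect_back_and_edge(wafer)
        result.update(back_defects=back, edge_profile=edge)
    return result

lot_a = [{"id": "A1", "front_defects": [(3, 7)], "back_defects": [(3, 8)]},
         {"id": "A2"}]
lot_b = [{"id": "B1"}, {"id": "B2", "front_defects": [(1, 1)]}]

with ThreadPoolExecutor(max_workers=2) as pool:  # two lots in parallel
    for lot in pool.map(lambda wafers: [process_wafer(w) for w in wafers],
                        [lot_a, lot_b]):
        print(lot)
```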
Further Reading
#ALGORITHMS: "Startup Claims 'Holy Grail' of SoC Design"
Microchip design has traditionally required a good idea plus a staff of engineers skilled at translating ideas into transistor diagrams that must be input to electronic design automation (EDA) workstations, then validated, tested and debugged. The Holy Grail here would be a good idea cast into a C program that is then automatically converted into a chip--which is exactly the business plan of this new startup. R. Colin Johnson
Algotochip starts with the designer's C-code (left), then generates an application-specific programmable microcontroller (top) and digital signal processor (DSP), along with a memory management unit (MMU) and input/output, which together implement an SoC.
The first automated software-to-chip dream came out of the closet Monday (April 23), when Algotochip Corp. (Sunnyvale, Calif.) claimed to be able to produce a system-on-chip (SoC) design from a C-code specification in just eight to 16 weeks.
Further Reading
#3D: "Stereoscopic Glasses-free Smartphones Enter 2nd Generation"
The second generation of 3D stereoscopic smartphones were unveiled in Europe today featuring a pushbutton 2D-to-3D converter and other innovations such as the ability to create 3D avatars from a user's own photos. R. Colin Johnson
Here is what LG says about its second generation glasses-free 3D smartphone: LG’s latest achievement in the glasses-free 3D space -- the Optimus 3D Max -- will kick-off its global roll-out today starting in Europe. As first seen at Mobile World Congress 2012, the second-generation 3D smartphone boasts an enhanced chipset and more enticing 3D entertainment features in a slimmer and lighter body.
The Optimus 3D Max now includes a new 3D Converter which allows for a greater variety of 3D content as it converts 2D content from Google Earth, Google Maps and other mapping apps into 3D. Visitors at MWC 2012 also raved about the device’s unique 3D video editor which allows the editing of 3D video on the phone in real time. And the 3D Hot Key mounted on the side of the phone enables users to easily toggle between 2D and 3D. The Optimus 3D Max includes 3D-style cubicle icons in addition to its customizable icons which can be amended by applying the users’ own photos through the Icon Customizer feature.
Additional features, which will be available through an upcoming maintenance release (MR), include an HD Converter that allows high-resolution content to be viewed on a TV connected through MHL (Mobile High-Definition Link), and Range Finder, which calculates the distance between the camera and a subject as well as the dimensions of an object through triangulation.
As for its new form-factor, the Optimus 3D Max is 2mm slimmer and 20g lighter than its predecessor, measuring only 9.6 mm thin and weighing 148g. The 5MP camera on the rear captures both photos and video in 3D using its dual lenses. The recorded material can be viewed directly on the smartphone in glasses-free 3D or on a 3D capable computer monitor or TV.
Further Reading
#MARKETS: "Global Press Electronics Summit Celebrates 10th Anniversary"
Celebrating its 10th anniversary, the annual Global Press Electronics Summit 2012 hosts 22 electronics companies presenting their latest innovations to 57 media editors from 17 countries. R. Colin Johnson
Featured speakers this year include: Tensilica chief technology officer (CTO) Chris Rowen, a pioneer in the development of reduced instruction set computing (RISC) at Stanford University and MIPS Computer Systems; Warren Savage, who pioneered intellectual property (IP) development at Fairchild Semiconductor, Tandem Computers and Synopsys and is now chief executive officer (CEO) of IP Extreme; WiSpry founder Jeff Hilbert, whose company is currently pioneering adaptive antennas for cell phones using MEMS capacitor arrays manufactured by its foundry partner IBM; Rajesh Vashist, the latest CEO of SiTime after MEMS pioneer Kurt Petersen left the company; Satish Padmanabhan, CTO and founder of Algotochip, whose EDA tool directly implements digital chips from C-algorithms in as little as eight weeks; John Kispert, CEO of memory chip giant Spansion; and Ravi Subramanian, CEO of Berkeley Design Automation, who is well known for developing first-generation DSP-based platforms for digital mobile phones.
Further Reading
Sunday, April 22, 2012
#MATERIALS: "IBM Conquers Carbon-Based Optical Chip Making"
Super-lightweight, ultra-low-power hybrid electrical/optical microchips based on the crystalline form of carbon--graphene--have now been demonstrated in IBM's research labs, and the era of graphene microelectronics now appears all but assured. R. Colin Johnson
Scanning electron microscope image of a five-layer graphene/insulator superlattice array of two-micron-diameter microdisks (purple).
Graphene has been courted as the miracle material of the future, since different formulations have been fabricated into conductors, semiconductors and insulators. Now IBM has added photonics to the list by demonstrating a graphene/insulator superlattice that achieves a terahertz-frequency notch filter and a linear polarizer, devices which could be useful in future mid- and far-infrared photonic devices, including detectors, modulators and three-dimensional metamaterials.
Further Reading
Friday, April 20, 2012
#SPACE: "Moonbuggy Winners for 2012 Announced"
NASA recently announced the winners of the 19th Annual NASA Great Moonbuggy Race, which is inspired by the original lunar rover first piloted across the moon's surface in the early 1970s during the Apollo missions. R. Colin Johnson
The team from the University of Alabama in Huntsville took top prize in the college division of NASA's Great Moonbuggy Race with a time of 4 minutes and 3 seconds. (NASA/MSFC/Emmett Given)
Here is what NASA says about the race: America's space agency today crowned its vehicular engineering victors at the close of the 19th annual NASA Great Moonbuggy Race at the U.S. Space & Rocket Center in Huntsville, Ala. The team from Petra Mercado High School in Humacao, Puerto Rico, won first place in the high school division; racers from the University of Alabama in Huntsville Team 1 claimed the college-division trophy.
The winning teams outraced more than 80 teams from 20 states, Puerto Rico, Canada, Germany, India, Italy, Russia and the United Arab Emirates. Approximately 600 student drivers, engineers and mechanics -- plus their team advisors and cheering sections -- gathered April 13-14 for the harrowing "space race."
Organized by NASA's Marshall Space Flight Center in Huntsville, the race challenges students to design, build and race lightweight, human-powered buggies. Traversing the grueling half-mile course, which simulates the cratered lunar surface, race teams face many of the same engineering challenges dealt with by Apollo-era lunar rover developers at the Marshall Center in the late 1960s. The winning teams post the fastest vehicle assembly and race times in their divisions, with the fewest on-course penalties.
The team from Petra Mercado, in its second year in the competition, finished the half-mile course in 3 minutes and 20 seconds. UA Huntsville brought home another win, finishing in 4 minutes and 3 seconds.
Finishing in second place this year in the high school division was Colegio Nuestra Senora del Perpetuo Socorro in Humacao, Puerto Rico. In third place was Arab High School Team 1 from Arab, Ala.
University of Puerto Rico at Humacao won second place in the college division; and Purdue University Calumet Team 1 from Hammond, Ind., took home third place.
Race organizers presented both first-place winners with trophies depicting NASA's original lunar rover. NASA also gave plaques and certificates to every competing team. Sponsor Lockheed Martin Corp. of Huntsville presented the first-place high school and college teams with cash awards of $2,850 each.
Individuals on the winning teams also received commemorative medals and other prizes. (For a complete list of additional awards for design, most improved and spirit, see below.)
The race is inspired by the original lunar rover, first piloted across the moon's surface in the early 1970s during the Apollo 15, 16 and 17 missions. Eight college teams participated in the first NASA Great Moonbuggy Race in 1994. The race was expanded in 1996 to include high school teams, and student participation has swelled each year since.
NASA's Great Moonbuggy Race has been hosted by the U.S. Space & Rocket Center since 1996. The race is sponsored by NASA's Human Exploration & Operations Mission Directorate in Washington. Major corporate sponsors are Lockheed Martin Corp., The Boeing Company, Northrop Grumman Corp. and Jacobs Engineering ESTS Group, all with operations in Huntsville.
Further Reading
#SPACE: "NASA Webb Spinoffs Benefit Private Sector"
NASA's Webb telescope will not be launched until 2018, but its spinoff technologies are already benefiting the materials, medical and aerospace industries. The high-precision techniques needed to cast its gigantic mirrors, measure their accuracy and build its ultra-sensitive detectors are already being deployed commercially, long before the telescope becomes operational. R. Colin Johnson
Artist conception of the James Webb Space Telescope. Credit: NASA
Here is what NASA says about Webb Telescope spinoffs: Much of the technology for the Webb had to be conceived, designed and built specifically to enable it to see farther back in time. As with many NASA technological advances, some of the innovations are being used to benefit humankind in many other industries.
The Webb telescope is the world's next-generation space observatory and successor to the Hubble Space Telescope. The most powerful space telescope ever built, the Webb telescope will provide images of the first galaxies ever formed, and explore planets around distant stars. It is a joint project of NASA, the European Space Agency and the Canadian Space Agency.
New technologies developed for NASA's James Webb Space Telescope have already been adapted and applied to commercial applications in various industries including optics, aerospace, astronomy, medical and materials. Some of these technologies can be explored for use and licensed through NASA's Office of the Chief Technologist at NASA's Goddard Space Flight Center, Greenbelt, Md.
Optics Industry: Telescopes, Cameras and More
The optics industry has been the beneficiary of a new stitching technique that is an improved method for measuring large aspheres. An asphere is a lens whose surface profiles are not portions of a sphere or cylinder. In photography, a lens assembly that includes an aspheric element is often called an aspherical lens. Stitching is a method of combining several measurements of a surface into a single measurement by digitally combining the data as though it has been "stitched" together.
Because NASA depends on the fabrication and testing of large, high-quality aspheric (nonspherical) optics for applications like the James Webb Space Telescope, it sought an improved method for measuring large aspheres. Through Small Business Innovation Research (SBIR) awards from NASA's Goddard Space Flight Center, QED Technologies, of Rochester, New York, upgraded and enhanced its stitching technology for aspheres. QED developed the SSI-A, which earned the company an "R&D 100" award, and also developed a breakthrough machine tool called the aspheric stitching interferometer. The equipment is applied to advanced optics in telescopes, microscopes, cameras, medical scopes, binoculars, and photolithography.
Aerospace and Astronomy:
In the aerospace and astronomy industries, the Webb program gave 4D Technology its first commercial contract to develop the PhaseCam Interferometer system, which measures the quality of the Webb telescope's mirror segments in a cryogenic vacuum environment. This is a new way of using interferometers in the aerospace sector.
An interferometer is a device that separates a beam of light into two beams, usually by means of reflection, and then brings the beams together to produce interference, which is used to measure wavelength, index of refraction, and also distances.
Interferometry involves the collection of electromagnetic radiation using two or more collectors separated by some distance to produce a sharper image than each telescope could achieve separately.
The PhaseCam interferometer verified that the surfaces of the Webb telescope's mirror segments were as close to perfect as possible, and that they will remain that way in the cold vacuum of space. To test the Webb mirror segments, they were placed in a "cryovac" environment, where air is removed by a vacuum pump and temperatures are dropped to the extreme cold of deep space that the spacecraft will experience. A new dynamic interferometric technique with very short exposures that are not smeared by vibration was necessary to perform these measurements to the accuracy required, particularly in the high-vibration environment caused by the vacuum chamber's pumps.
The interferometer resulting from this NASA partnership can be used to evaluate future mirrors that need to be tested in vacuum chambers where vibration is a problem.
Medical Industry: Eye Health
New "wavefront" optical measurement devices and techniques were created for making the Webb telescope mirrors. Those have led to spinoffs in the medical industry where precise measurements are critical in eye health, for example.
The technology came about to accurately measure the James Webb Space Telescope primary mirror segments during manufacturing. Scientists at AMO WaveFront Sciences, LLC of Albuquerque, N.M. developed a new "wavefront" measurement device called a Scanning Shack Hartmann Sensor.
The optical measuring technology developed for the Webb, called "wavefront sensing," has been applied to the measurement of the human eye and allowed for significant improvements. "The Webb telescope program has enabled a number of improvements in measurement of human eyes, diagnosis of ocular diseases and potentially improved surgery," said Dan Neal, director of research and development at Abbott Medical Optics Inc. in Albuquerque, N.M. The Webb improvements have enabled eye doctors to get much more detailed information about the shape and "topography" of the eye in seconds rather than hours.
Materials Industry: Measuring Strength
Webb technologies have opened the door to better measurement in testing the strength of composite materials. Measuring strain in composite materials is the same as measuring how much they change in certain environments. Measuring step heights allows one to understand very small changes in a surface profile and doing all of this at high speed allows the device to work even in the presence of vibration that would normally blur the results.
Webb telescope technologies have also been beneficial to the economy. The technologies have enabled private sector companies such as 4D to generate significant revenue and create high-skill jobs. Much of 4D's growth from a two man start-up to over 35 people can be traced to projects originally developed for the telescope. 4D has also been able to adapt these technologies for a wide range of applications within the astronomy, aerospace, semiconductor and medical industries.
In the future, other industries may benefit from other Webb telescope technologies.
Further Reading
Thursday, April 19, 2012
#MEDICAL: "Migraines? Shock Them Out of Your Brain"
Electrical stimulation of the brain can mitigate migraine headaches, according to researchers. The results are still preliminary, but they do hold out hope for a drug-free electronic therapy for migraines. R. Colin Johnson
Placement of the electrodes used to electrically stimulate the brain to mitigate chronic migraine headaches.
Here is what the researchers say about their own discovery: Chronic migraine sufferers saw significant pain relief after four weeks of electrical brain stimulation in the part of the brain responsible for voluntary movement, the motor cortex, according to a new study by researchers from the University of Michigan School of Dentistry, Harvard University and the City College of the City University of New York. The researchers used a noninvasive method called transcranial direct current stimulation (tDCS) as a preventative migraine therapy on 13 patients with chronic migraine, or at least 15 attacks a month. After 10 sessions, participants reported an average 37 percent decrease in pain intensity.
The effects were cumulative and kicked in after about four weeks of treatment, said Alexandre DaSilva, assistant professor at the U-M School of Dentistry and lead author of the study, which appears in the journal Headache.
The researchers also tracked the electric current flow through the brain to learn how the therapy affected different regions.
They did this by using a high-resolution computational model. They correctly predicted that the electric current would go where directed by the electrodes placed on the subject's head, but the current also flowed through other critical regions of the brain associated with how we perceive and modulate pain.
Other studies have shown that stimulation of the motor cortex reduces chronic pain. However, this study provided the first known mechanistic evidence that tDCS of the motor cortex might work as an ongoing preventive therapy in complex, chronic migraine cases, where attacks are more frequent and resilient to conventional treatments.
While the results are encouraging, any clinical application is a long way off, DaSilva said.
Further Reading
Wednesday, April 18, 2012
#WIRELESS: "Radio Frequency Identification Skyrockets"
The market for Radio Frequency Identification (RFID) tags will grow by over $1 billion in 2012, on track to generate tens of billions of dollars over the next five years, according to Allied Business Intelligence (ABI Research, Oyster Bay, N.Y.). While near-field communications (NFC) now also offers ultra-inexpensive identification tags that can be scanned with the readers built into many smartphones, RFID is a more mature technology and will likely remain dominant in commercial applications such as inventory control. R. Colin Johnson
Here is what ABI Research says about their new market report on RFIDs: The market for RFID transponders, readers, software, and services will generate $70.5 billion from 2012 to the end of 2017. The market was boosted by a growth of $900 million in 2011 and the market is expected to grow 20% YOY per annum. Government, retail, and transportation and logistics have been identified as the most valuable sectors, accounting for 60% of accumulated revenue over the next five years.
ABI Research’s new study, “RFID Market by Application and Vertical Sector,” provides a comprehensive overview and summary of the impact that the latest product launches, new entrants, and changing market dynamics will have on the future direction and evolution of the market. It provides an excellent introduction and guide for those new to the market, as well as a timely update for those experienced within the RFID market.
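As a rough sanity check on those growth figures (my own back-of-the-envelope illustration, not ABI's model), compounding a 2011 base-year revenue $R_{2011}$ at 20 percent per year gives a cumulative 2012-2017 total of
$R_{2012\text{--}2017} = R_{2011}\sum_{k=1}^{6} (1.2)^k = R_{2011}\cdot 1.2\cdot\frac{1.2^{6}-1}{0.2} \approx 11.9\,R_{2011},$
so the $70.5 billion cumulative figure is consistent with a 2011 base of roughly $6 billion under this simplified constant-growth assumption; ABI's actual segmentation by application and vertical will differ.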
Further Reading
Tuesday, April 17, 2012
#MATERIALS: "Graphene to Replace Conductors/Semiconductors/Insulators"
Graphene promises to replace conductors, semiconductors and insulators in future electronic devices, resulting in feather-light devices with week-long battery lifetimes. To manufacture these new materials economically, researchers at the University of Wisconsin-Milwaukee propose a new formulation that sidesteps the issues preventing graphene from being mass produced today. R. Colin Johnson
Physics Professor Michael Weinert and engineering graduate student Haihui Pu display the atomic structure on GMO. (Photos by Alan Magayne-Roshak)
Here is what the University of Wisconsin-Milwaukee says about its discovery: Scientists and engineers at the University of Wisconsin-Milwaukee (UWM) have discovered an entirely new carbon-based material that is synthesized from the “wonder kid” of the carbon family, graphene. The discovery, which the researchers are calling “graphene monoxide (GMO),” pushes carbon materials closer to ushering in next-generation electronics.
Graphene, a one-atom-thick layer of carbon that resembles a flat sheet of chicken wire at nanoscale, has the potential to revolutionize electronics because it conducts electricity much better than the gold and copper wires used in current devices. Transistors made of silicon are approaching the minimum size at which they can be effective, meaning the speed of devices will soon bottom out. Carbon materials at nanoscale could be the remedy.
Now all three characteristics of electrical conductivity – conducting, insulating and semiconducting – are found in the carbon family, offering needed compatibility for use in future electronics.
Currently, applications for graphene are limited because it’s too expensive to mass produce. Another problem is that, until now, graphene-related materials existed only as conductors or insulators.
GMO exhibits characteristics that will make it easier to scale up than graphene. And, like silicon in the current generation of electronics, GMO is semiconducting, necessary for controlling the electrical current in such a strong conductor as graphene.
The team created GMO while conducting research into the behavior of a hybrid nanomaterial engineered by Chen that consists of carbon nanotubes (essentially, graphene rolled into a cylinder) decorated with tin oxide nanoparticles. Chen uses his hybrid material to make high-performance, energy-efficient and inexpensive sensors.
To image the hybrid material as it was sensing, he and physics professor Marija Gajdardziska used a high-resolution transmission electron microscope (HRTEM). But to explain what was happening, the pair needed to know which molecules were attaching to the nanotube surface, which were attaching to the tin oxide surface, and how they changed upon attachment.
So the pair turned to physics professor Carol Hirschmugl, who recently pioneered a method of infrared imaging (IR) that not only offers high-definition images of samples, but also renders a chemical “signature” that identifies which atoms are interacting as sensing occurs.
Chen and Gajdardziska knew they would need to look at more attachment sites than are available on the surface of a carbon nanotube. So they “unrolled” the nanotube into a sheet of graphene to achieve a larger area.
That prompted them to search for ways to make graphene from its cousin, graphene oxide (GO), an insulator that can be scaled up inexpensively. GO consists of layers of graphene stacked on top of one another in an unaligned orientation. It is the subject of much research as scientists look for cheaper ways to replicate graphene’s superior properties.
Physics senior scientist Marvin Schofield (standing), physics doctoral student Eric Mattson, and Graduate School associate dean and physics professor Marija Gajdardziska examine the images of GMO using Selected Area Electron Diffraction (SAED) in a transmission electron microscope.
In one experiment, they heated the GO in a vacuum to reduce oxygen. Instead of being destroyed, however, the carbon and oxygen atoms in the layers of GO became aligned, transforming themselves into the “ordered,” semiconducting GMO – a carbon oxide that does not exist in nature.
At different high temperatures, the team actually produced four new materials that they collectively refer to as GMO. They captured video of the process using Selected Area Electron Diffraction (SAED) in a transmission electron microscope.
Because GMO is formed in single sheets, Gajdardziska says the material could have applications in products that involve surface catalysis. She, Hirschmugl and Chen also are exploring its use in the anode parts of lithium-ion batteries, which could make them more efficient.
But the next step is more science. The team will need to find out what triggered the reorganization of the material, and also what conditions would ruin the GMO’s formation.
The team had to be careful in calculating how electrons flowed across GMO, he adds. Interactions that occur had to be interpreted through a painstaking process of tracking indicators of structure and then eliminating those that didn’t fit.
Further Reading
#3D: "LG Aggregates Stereoscopic Content for Smart TVs"
3D stereoscopic TV channels have not been particularly successful, given the spotty adoption of the technology. LG aims to remedy this by aggregating 3D stereoscopic content for its Smart TV users, who are certain to have 3D screens on which to view it. Called the 3D World portal, the service is available worldwide. R. Colin Johnson
Here is what LG says about its new 3D World portal: LG Electronics (LG) announced the worldwide opening of 3D World, a premium content service that will be available to LG’s CINEMA 3D Smart TV users in nearly 70 countries. With DNA from LG’s original 3D Zone Smart TV app launched last year, 3D World gives LG customers access to an expansive selection of high quality 3D content via a “card” on the Home Dashboard.
3D World allows customers the ability to search through high quality 3D content across numerous content categories such as entertainment, sports, documentary, kids, and lifestyle. Once the content is selected, LG’s CINEMA 3D Smart TV brings it to life in beautifully rendered 3D images. The action scenes in sports become more dynamic and exciting, documentaries more realistic, educational videos in the kids’ category more captivating.
Whether it’s cooking, travel, fashion or any other interest, there’s something for everyone. In addition to the content, LG plans to pursue further collaborations with global 3D content providers in order to bring the most sought after 3D content to LG customers.
3D World will be offered in app-format for CINEMA 3D Smart TVs that were produced in 2011, while 2012 models will use the streamlined card system on the Home Dashboard. Ω
Monday, April 16, 2012
#WIRELESS: "Internet-of-Things Dwarfs People-to-People Comm"
Not so long ago, alarmists fretted about running out of Internet Protocol domain space. Then IPv6 opened up plenty of addresses for machine-to-machine (M2M) communications and the much-hyped Internet of Things. The challenge now becomes making sense of all the sensor data that will stream from our myriad connected devices to the cloud. And the opportunity becomes crafting the applications that address society’s problems today and anticipate the unforeseen needs of tomorrow.
ZigBee technology can enable the connected home by letting devices such as lights, thermostats, security sensors, smart meters and in-home displays communicate with one another to create safer, greener, more comfortable living environments. SOURCE: Ember
The Internet addressing system conceived in 1977 at the U.S. Department of Defense by Vint Cerf, today chief Internet evangelist at Google, used 32-bit Internet Protocol (IP) addresses to connect people to people, providing more than 4.3 billion unique hosts for trusted user accounts. As the Internet began to be dominated by M2M connections, a revised, 128-bit scheme (IPv6) was adopted to allow for 18 billion billion hosts, accommodating more than 300 trillion trillion trillion secure devices.
Now there is more than enough address space, along with Internet Protocol Security (IPsec), to accommodate the universe of cloud-ready devices that IBM Corp. last year predicted would surpass 1 trillion nodes by 2015.
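For reference, the address-space arithmetic behind those figures (my own calculation, not from the article): the original 32-bit scheme provides $2^{32} = 4{,}294{,}967{,}296 \approx 4.3\times10^{9}$ addresses, while 128-bit IPv6 provides $2^{128} \approx 3.4\times10^{38}$, or roughly 340 trillion trillion trillion; the oft-quoted "18 billion billion" corresponds to the $2^{64} \approx 1.8\times10^{19}$ subnets available when the lower 64 bits are reserved for interface identifiers.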
Further Reading
#MATERIALS: "IBM and ST Closing Gap with Intel"
Intel is so far ahead of the rest of the world that it is hard to imagine anyone catching up, but silicon wafer maker Soitec S.A. claims that chip makers can bridge the gap in a single leap by using its prefabricated wafers to sidestep the years of development work needed to perfect Intel-like fully depleted (FD) transistors.
STMicroelectronics and ST-Ericsson are already using Soitec's 2D silicon-on-insulator (SOI) wafers for their planar systems-on-chip (SoCs).
Just switch to Soitec's silicon-on-insulator (SOI) wafers and you can trim years off your catch-up efforts, a claim that has already convinced STMicroelectronics NV, ST-Ericsson and IBM Corp. to give Soitec a try.
IBM is using 3D silicon-on-insulator (SOI) wafers for its 14 nanometer 3D FinFETs due circa 2015.
Further Reading
#CHIPS: "Moore's Law Extended for Multi-Core Scaling"
Today, direct-write cache memories are the mainstay of microprocessors, since they lower memory latency in a manner transparent to application programs. However, designers of advanced processors have advocated a switch to software-managed scratchpads and message-passing techniques for next-generation multi-core processors, such as the Cell Broadband Engine Architecture developed by IBM, Toshiba and Sony, which is used for the PlayStation 3.
Unfortunately, software-managed scratchpads and message-passing techniques put an additional burden on application programmers and in that sense mark a step backwards in microprocessor evolution. Now Semiconductor Research Corp. (SRC) claims to have solved the scaling problem for next-generation processors with up to 512 cores, by using hierarchical hardware coherence that remains transparent to application programs as the natural evolution of today's multi-level caches.
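To make the programmer-burden contrast concrete, here is a minimal C sketch (my own illustration, not SRC's, IBM's or the Cell SDK's API; dma_copy() and SCRATCH_BYTES are hypothetical stand-ins). The cached version simply touches memory and relies on the hardware to stay coherent, while the scratchpad version must tile the data and stage every tile explicitly:

/* Illustration only: contrasts a transparently cached loop with a
   software-managed scratchpad. dma_copy() and SCRATCH_BYTES are hypothetical
   stand-ins, not any vendor's API. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SCRATCH_BYTES 4096                              /* pretend on-core SRAM */
static int32_t scratchpad[SCRATCH_BYTES / sizeof(int32_t)];

static void dma_copy(void *dst, const void *src, size_t n) {
    memcpy(dst, src, n);                                /* models an explicit DMA transfer */
}

/* Cache model: the program just touches memory; the coherent cache hierarchy
   keeps every core's view consistent with no extra application code. */
static long sum_cached(const int32_t *data, size_t n) {
    long sum = 0;
    for (size_t i = 0; i < n; i++) sum += data[i];
    return sum;
}

/* Scratchpad model: the programmer must tile the data, stage each tile into
   local memory, and only then compute on it. */
static long sum_scratchpad(const int32_t *data, size_t n) {
    long sum = 0;
    const size_t tile = SCRATCH_BYTES / sizeof(int32_t);
    for (size_t i = 0; i < n; i += tile) {
        size_t chunk = (n - i < tile) ? n - i : tile;
        dma_copy(scratchpad, &data[i], chunk * sizeof(int32_t));
        for (size_t j = 0; j < chunk; j++) sum += scratchpad[j];
    }
    return sum;
}

int main(void) {
    static int32_t data[10000];
    for (size_t i = 0; i < 10000; i++) data[i] = (int32_t)i;
    printf("cached=%ld scratchpad=%ld\n",
           sum_cached(data, 10000), sum_scratchpad(data, 10000));
    return 0;
}

The hierarchical hardware coherence SRC describes would let the first, simpler style keep working as core counts grow toward 512, rather than forcing application programmers into the second.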
Further Reading
Friday, April 13, 2012
#MARKETS: "Cool Product Expo is...well...Cool"
The Stanford University Cool Product Expo showcased the coolest new products and companies in the field of manufacturing and design. Attendees this year encountered budding start-ups, university research projects, the latest concept-products from global manufacturers, and the best from local design studios in the Palo Alto research community. R. Colin Johnson
Coincident.TV (a San Francisco-based startup and participant at CPX 2011) created a clickable interactive video tour that showcases many of the exhibiting companies and their very cool products. Click "Further Reading" (below) to try it out.
Here is what Stanford University says about the Cool Product Expo: The Cool Product Expo 2012 will be one for the memory books! Thanks to all of our exhibitors, and a special thanks to all those who attended the Expo despite the day’s rain. From QnQ’s virtual DJ system in one corner to 3D System’s Cube printer in another, the entire Expo was filled with innovative products and ideas that delighted and amazed.
Further Reading
#ALGORITHMS: "Variable Clock Speeds Multi-Core Development"
Developing new multi-core processors has been hampered by simulation techniques that don't work well with the divide-and-conquer strategies adopted by parallel processing algorithms using many cores simultaneously. Now MIT claims to have solved these problems with its Arete system, which varies the ratio between real clock cycles and simulated clock cycles in a field-programmable gate array, speeding development without sacrificing accuracy. R. Colin Johnson
Arete makes use of a Xilinx field-programmable gate array, or FPGA (center). Photo: Thomas.L/flickr
Here is what MIT says about Arete: Most computer chips today have anywhere from four to 10 separate cores, or processing units, which can work in parallel, increasing the chips’ efficiency. But the chips of the future are likely to have hundreds or even thousands of cores.
For chip designers, predicting how these massively multicore chips will behave is no easy task. Software simulations work up to a point, but more accurate simulations typically require hardware models — programmable chips that can be reconfigured to mimic the behavior of multicore chips.
At the IEEE International Symposium on Performance Analysis of Systems and Software earlier this month, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) presented a new method for improving the efficiency of hardware simulations of multicore chips. Unlike competing methods, it guarantees that the simulator won’t go into “deadlock” — a state in which cores get stuck waiting for each other to relinquish system resources, such as memory. The method should also make it easier for designers to develop simulations and for outside observers to understand what those simulations are intended to do.
Hardware simulations of multicore chips typically use devices called field-programmable gate arrays, or FPGAs. An FPGA is a chip with an array of simple circuits and memory cells that can be hooked together in novel configurations after the chip has left the factory. The chips sold by some small-market manufacturers are, in fact, specially configured FPGAs.
Chip architects using FPGAs to test multicore-chip designs, however, must simulate the complex circuitry found in general-purpose microprocessors. One way to do that is to hook together a lot of the FPGA’s simple circuits, but that consumes so many of them so rapidly that the simulator ends up modeling only a small portion of the whole chip design. The other approach is to simulate the complex circuits’ behavior in stages — using a partial circuit but spending, say, eight clock cycles on a calculation that, in a real chip, would take only one. Traditionally, however, that’s meant slowing down the whole simulation, to allow eight real clock cycles per one simulated cycle.
For a simulation system they’ve dubbed Arete, graduate students Asif Khan and Muralidaran Vijayaraghavan; their adviser, Arvind, the Charles W. and Jennifer C. Johnson Professor of Electrical Engineering and Computer Science; and Silas Boyd-Wickizer, a CSAIL graduate student in the Parallel and Distributed Operating Systems Group, adopted the second approach, but they developed a circuit design that allows the ratio between real clock cycles and simulated cycles to fluctuate as needed. That allows for faster simulations and more economical use of the FPGA’s circuitry.
Every logic circuit has some number of input wires and some number of output wires, and the CSAIL researchers associate a little bit of memory with each such wire. Data coming in on a wire is stored in memory until all the operations that require it have been performed; data going out on a wire is stored in memory until the data going out on the other wires has been computed, too. Once all the outputs have been determined, the input data is erased, signaling the completion of one simulated clock cycle. Depending on the complexity of the calculation the circuit was performing, the simulated clock cycle could correspond to one real clock cycle, or eight, or something in between.
The CSAIL researchers also argue that their approach makes it easier for outside observers — and even for chip designers themselves — to understand what a simulation is intended to do. Their high-level language, which they dubbed StructuralSpec, builds on the BlueSpec hardware design language that Arvind’s group helped develop in the late 1990s and early 2000s. The StructuralSpec user gives a high-level specification of a multicore model, and software spits out the code that implements that model on an FPGA. Where a typical, hand-coded hardware model might have about 30,000 lines of code, Khan says, a similar model implemented in StructuralSpec might have only 8,000 lines of code.
Kattamuri Ekanadham, a chip researcher at IBM’s T. J. Watson Laboratory, is currently building his own implementation of the MIT researchers’ simulator.
Further Reading
#CHIPS: "Magnetic Metrology Boosts Microchip Reliability"
A magnetic metrology technique aims to boost the reliability of semiconductor microchips, solar cells, and micro-electro-mechanical systems (MEMS). Invented at the Georgia Institute of Technology (Georgia Tech, Atlanta), the magnetically actuated peel test (MAPT) requires no hardware fixtures or physical contact of any kind, yet can accurately measure how well electronic devices are bonded together, and thus predict their lifetimes better than today's destructive methods. R. Colin Johnson
Magnetically actuated peel test (MAPT) places an electromagnet on one side of a sample and an optical profiler on the other.
Here is what Georgia Tech says about its new magnetic metrology technique: Taking advantage of the force generated by magnetic repulsion, researchers have developed a new technique for measuring the adhesion strength between thin films of materials used in microelectronic devices, photovoltaic cells and microelectromechanical systems (MEMS).
The fixtureless and noncontact technique, known as the magnetically actuated peel test (MAPT), could help ensure the long-term reliability of electronic devices, and assist designers in improving resistance to thermal and mechanical stresses.
Developed by Suresh Sitaraman, a professor in the George W. Woodruff School of Mechanical Engineering at the Georgia Institute of Technology, the research has been supported by the National Science Foundation.
Modern microelectronic chips are fabricated from layers of different materials – insulators and conductors – applied on top of one another. Thermal stress can be created when heat generated during the operation of the devices causes the materials of adjacent layers to expand, which occurs at different rates in different materials. The stress can cause the layers to separate, a process known as delamination or de-bonding, which is a major cause of microelectronics failure.
Sitaraman and doctoral candidate Gregory Ostrowicki have used their technique to measure the adhesion strength between layers of copper conductor and silicon dioxide insulator. They also plan to use it to study fatigue cycling failure, which occurs over time as the interface between layers is repeatedly placed under stress. The technique may also be used to study adhesion between layers in photovoltaic systems and in MEMS devices.
The Georgia Tech researchers first used standard microelectronic fabrication techniques to grow layers of thin films that they want to evaluate on a silicon wafer. At the center of each sample, they bonded a tiny permanent magnet made of nickel-plated neodymium (NdFeB), connected to three ribbons of thin-film copper grown atop silicon dioxide on a silicon wafer.
The sample was then placed into a test station that consists of an electromagnet below the sample and an optical profiler above it. Voltage supplied to the electromagnet was increased over time, creating a repulsive force between the like magnetic poles. Pulled upward by the repulsive force on the permanent magnet, the copper ribbons stretched until they finally delaminated.
With data from the optical profiler and knowledge of the magnetic field strength, the researchers can provide an accurate measure of the force required to delaminate the sample. The magnetic actuation has the advantage of providing easily controlled force consistently perpendicular to the silicon wafer.
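Reduced to code, that measurement might look something like the following sketch (illustrative only: the calibration constant, the jump threshold and the sample data are assumptions, not values from the Georgia Tech experiments):

```python
# Hypothetical reduction of MAPT data to a delamination force.
# Calibration constant, threshold and data are invented for illustration.

def repulsive_force(coil_voltage, newtons_per_volt=0.02):
    # Assume the electromagnet has been calibrated so that the repulsive
    # force on the permanent magnet scales linearly with coil voltage.
    return newtons_per_volt * coil_voltage

def delamination_force(voltages, heights_um, jump_threshold_um=5.0):
    """Return the force at which the copper ribbons let go.

    voltages   -- coil voltage at each step of the upward ramp
    heights_um -- magnet height from the optical profiler at each step
    A sudden jump in height between consecutive steps is taken as the
    moment of delamination.
    """
    for i in range(1, len(heights_um)):
        if heights_um[i] - heights_um[i - 1] > jump_threshold_um:
            return repulsive_force(voltages[i])
    return None   # no delamination observed over this ramp

volts = [0, 2, 4, 6, 8, 10]
heights = [0.0, 0.8, 1.7, 2.9, 4.2, 60.0]     # sharp jump at the final step
print(delamination_force(volts, heights))     # 0.2 N under the assumed calibration
```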
Because many samples can be made at the same time on the same wafer, the technique can be used to generate a large volume of adhesion data in a timely fashion.
But device failure often occurs gradually over time as the layers are subjected to the stresses of repeated heating and cooling cycles. To study this fatigue failure, Sitaraman and Ostrowicki plan to cycle the electromagnet’s voltage on and off.
The test station is small enough to fit into an environmental chamber, allowing the researchers to evaluate the effects of high temperature and/or high humidity on the strength of the thin film adhesion. This is particularly useful for electronics intended for harsh conditions, such as automobile engine control systems or aircraft avionics, Sitaraman said.
So far, Sitaraman and Ostrowicki have studied thin film layers about one micron in thickness, but say their technique will work on layers that are of sub-micron thickness. Because their test layers are made using standard microelectronic fabrication techniques in Georgia Tech’s clean rooms, Sitaraman believes they accurately represent the conditions of real devices.
As device sizes continue to decline, Sitaraman says the interfacial issues will grow more important.
Further Reading
Thursday, April 12, 2012
#ENERGY: "Living Fuel Cells Harness Biology"
By implanting fuel cells in living things, researchers may be opening the door to a bold new world in which biological organisms generate energy for future societies. Just as lab mice are now patented, does the world hold a future of animals owned by energy-generating corporations? R. Colin Johnson
Cyborg snail harbors a fuel cell implanted by Clarkson University scientist Evgeny Katz.
Here is what the Journal of the American Chemical Society says about living fuel cells: The world’s first “electrified snail” has joined the menagerie of cockroaches, rats, rabbits and other animals previously implanted with biofuel cells that generate electricity — perhaps for future spy cameras, eavesdropping microphones and other electronics — from natural sugar in their bodies. Scientists are describing how their new biofuel cell worked for months in a free-living snail in the Journal of the American Chemical Society.
In the report, Evgeny Katz and colleagues point out that many previous studies have involved “potentially implantable” biofuel cells. So far, however, none has produced an implanted biofuel cell in a small live animal that could generate electricity for an extended period of time without harming the animal.
To turn a living snail into a power source, the researchers made two small holes in its shell and inserted high-tech electrodes made from compressed carbon nanotubes. They coated the highly conductive material with enzymes, which foster chemical reactions in animals’ bodies. Using a different enzyme on each electrode, one pulling electrons from glucose and another using those electrons to turn oxygen molecules into water, they induced an electric current. Importantly, the long-lasting enzymes could generate electricity again and again after the scientists fed and rested what they termed the “electrified” snail, which lived freely for several months with the implanted fuel cell.
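Though the excerpt does not name the enzymes, the two electrode reactions it describes (glucose oxidation at the anode, oxygen reduction at the cathode) are conventionally written as:

```latex
% Typical half-reactions for an enzymatic glucose/oxygen biofuel cell;
% the specific enzymes are not named in the excerpt above.
\begin{align*}
\text{Anode:}\quad  &\mathrm{C_6H_{12}O_6 \;\longrightarrow\; C_6H_{10}O_6 + 2H^+ + 2e^-} \\
\text{Cathode:}\quad &\mathrm{O_2 + 4H^+ + 4e^- \;\longrightarrow\; 2H_2O}
\end{align*}
```

The electrons stripped from the snail's glucose at the anode travel through the external circuit to the cathode, and that external flow is the current the researchers harvested.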
Further Reading
#SECURITY: "Know Your Social Networking Bill of Rights"
Why should you give up your privacy and other constitutionally protected rights just because you enjoy social networking? The first step to asserting your rights is to know them, as expressed in the infographic below. Know your rights! R. Colin Johnson
Further Reading
#ALGORITHMS: "Analytics Solve Brain's Mystery"
Analytics has been harnessed to plumb the mysteries of the brain by discovering rules for its neural networks that allow algorithms to mimic its behavior without duplicating every little detail, according to the École Polytechnique Fédérale de Lausanne (EPFL). Although much work remains to realize this dream, the hope is that brain models can now be run on reasonably sized computers and still achieve unparalleled accuracy. R. Colin Johnson
Here is what EPFL says about its latest results: The École Polytechnique Fédérale de Lausanne has discovered rules that relate the genes that a neuron switches on and off, to the shape of that neuron, its electrical properties and its location in the brain.
Using data-mining analytics, EPFL's senior researcher on the project, Henry Markram, now believes reasonably sized models can predict the fundamental structure and functions of the brain. Since every aspect of the brain need not be modeled, the new technique greatly reduces the complexity of brain models, enabling them to run on conventional computer resources, which opens the door to "predictive biology," the Holy Grail of the EPFL's Human Brain Project.
Within a cortical column, the basic processing unit of the mammalian brain, there are roughly 300 different neuronal types. These types are defined both by their anatomical structure and by their electrical properties, and their electrical properties are in turn defined by the combination of ion channels they present—the tiny pores in their cell membranes through which electrical current passes, which make communication between neurons possible.
Scientists would like to be able to predict, based on a minimal set of experimental data, which combination of ion channels a neuron presents.
They know that genes are often expressed together, perhaps because two genes share a common promoter—the stretch of DNA that allows a gene to be transcribed and, ultimately, translated into a functioning protein—or because one gene modifies the activity of another. The expression of certain gene combinations is therefore informative about a neuron’s characteristics, and Georges Khazen and co-workers hypothesized that they could extract rules from gene expression patterns to predict those characteristics.
The researchers took a dataset that Markram and others had collected a few years ago, in which they recorded the expression of 26 genes encoding ion channels in different neuronal types from the rat brain. They also had data classifying those types according to a neuron’s morphology, its electrophysiological properties and its position within the six, anatomically distinct layers of the cortex. They found that, based on the classification data alone, they could predict those previously measured ion channel patterns with 78 per cent accuracy. And when they added in a subset of data about the ion channels to the classification data, as input to their data-mining program, their analytics accuracy was boosted to 87 per cent for the more commonly occurring neuronal types.
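The EPFL pipeline itself is not spelled out in the excerpt, but the flavor of the experiment can be sketched with an off-the-shelf classifier: predict whether a given ion-channel gene is expressed from a neuron's layer, morphological class and electrical type. Everything below (feature encoding, synthetic data, choice of random forest) is an assumption made for illustration:

```python
# Illustrative stand-in for the data-mining step: predict expression of one
# ion-channel gene from a neuron's classification data. Synthetic data only.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_neurons = 200

# Features: cortical layer (1-6), morphology class id, electrical type id.
X = np.column_stack([
    rng.integers(1, 7, n_neurons),    # layer
    rng.integers(0, 10, n_neurons),   # morphological class
    rng.integers(0, 12, n_neurons),   # electrical type
])

# Target: whether one ion-channel gene is expressed (made to depend weakly
# on layer so the classifier has a real, if noisy, rule to discover).
y = (X[:, 0] + rng.normal(0, 1.5, n_neurons) > 3.5).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

In the real study the targets were the measured expression patterns of 26 ion-channel genes recorded from classified rat neurons, which is where the 78 and 87 per cent accuracy figures above come from.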
According to team member Felix Schürmann, the increased accuracy of their results shows that it is now possible for analytics to mine rules from a subset of data and use them to improve results without having to measure every aspect of a behavior. Once the rules have been validated in similar, but independently collected, datasets, they can be used to predict the entire complement of functions (here, the ion channels presented by a given neuron) based simply on data about that neuron's morphology, its electrical behavior and a few key genes that it expresses.
The researchers now hope to use analytics to derive rules that express the roles of different genes in regulating transcription processes. The team's reasoning is that if rules exist for ion channels, they are also likely to exist for other aspects of brain organization. For example, the researchers believe it will be possible to predict where synapses are likely to form in neural networks, based on information about the ratio of neuronal types in that network. Knowledge of such rules could therefore usher in a new era of predictive biology, and accelerate progress towards a better understanding of the brain, as well as more manageable models of the brain.
Further Reading