With Microsoft's Surface, Google's Nexus, Barnes & Noble's Nook and Amazon's Fire all copying Apple's iPad by using a battery-draining full-color LCD display, analysts were predicting the demise of E-Ink's paper-white B&W display, which extends battery life for eBook readers from days to weeks. Nevertheless, the E-Ink display continues to be used in ultra-inexpensive, feather-light eBook readers, as evidenced by Epson's recent release of a complete integrated controller module that simplifies the eBook designer's job: R. Colin Johnson
Just add this inexpensive E-Ink controller to your design for quick-and-easy eBook devices whose battery life lasts for weeks, and whose weight and cost are a fraction of Apple's iPad and all the look-alikes from Microsoft, Google, Amazon, Barnes & Noble and the legions of other copy-cats.
Here is what Epson says about its E-Ink controller: Seiko Epson Corporation ("Epson", TSE: 6724), a global supplier of imaging products and semiconductor solutions, today announced a newly developed e-paper display (EPD) controller module. The S4E5B001B000A00, which measures only 2.3 cm x 2.3 cm, consists of the key electronic elements necessary for an E Ink EPD-based product, including Epson's high-performance EPD controller (S1D13522), a power management IC (PMIC), 4-Mbit flash memory for command/waveform storage, and an on-board 26-MHz crystal oscillator.
The EPD controller (S1D13522) mounted on the module is an industry-proven multi-pipeline EPD controller that has already been widely adopted by various e-book manufacturers. It reduces CPU overhead for EPD applications by allowing multi-regional and concurrent display updates, picture-in-picture, rotation, transparency and hardware cursor functions to achieve an optimal display experience.
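To make the "multi-regional update" idea concrete, here is a minimal host-side sketch of the kind of call sequence such a controller enables. The class, command strings and bus object below are illustrative stand-ins only; they are not Epson's actual S1D13522 register map or driver API.

```python
# Hypothetical host-side sketch of a multi-regional e-paper update.
# All names here are illustrative; this is NOT Epson's S1D13522 API.

class StubBus:
    """Stand-in for the real host interface (SPI/parallel); just logs traffic."""
    def write_cmd(self, cmd, *args):
        print("CMD ", cmd, args)
    def write_data(self, data):
        print("DATA", len(data), "bytes")

class EpdModule:
    def __init__(self, bus):
        self.bus = bus

    def init(self):
        # Bring up the PMIC rails and load the update waveforms stored in flash.
        self.bus.write_cmd("INIT_SEQUENCE")

    def update_region(self, x, y, w, h, pixels, waveform="fast_mono"):
        # A multi-regional controller refreshes only the window that changed
        # (say, a status bar) while the rest of the page stays static,
        # which is what keeps CPU load and panel power so low.
        self.bus.write_cmd("SET_WINDOW", x, y, w, h)
        self.bus.write_data(pixels)
        self.bus.write_cmd("TRIGGER_UPDATE", waveform)

epd = EpdModule(StubBus())
epd.init()
epd.update_region(0, 0, 200, 32, bytes(200 * 32 // 8))  # refresh a 200x32 strip only
```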
Epson believes that the S4E5B001B000A00 is an ideal choice for any customer who wants to develop EPD applications easily, without going through a complex technical learning process, and that it has the potential to accelerate time-to-market for EPD-related products.
"We applaud Epson for constantly innovating their semiconductor offerings," said Sri Peruvemba, CMO, E Ink. "The new module will reduce design time and complexity for customers and will help open up new markets."
"In recent years, Epson has mainly concentrated on providing unique EPD controller products to e-book customers. Our next goal is to expand our product lineup to include industrial and other promising new applications," said Kazuhiro Takenaka, deputy chief operating officer of Epson's Microdevices Operations Division. "Customers are already designing products using our module and we expect to see many more opportunities as we move forward."
To further assist customers in integrating EPDs into their products, CPU companies are expected to release reference designs featuring the S4E5B001B000A00 in the fourth quarter of 2012. Samples of the S4E5B001B000A00 are available today, and production is expected to start in December 2012.
Epson plans to demo the S4E5B001B000A00 at Electronica 2012 in Munich, Germany (November 13 to 16).
Further Reading
Tuesday, October 30, 2012
#MARKETING: "Smartphone/Tablet Convergence at Apple/Google/Microsoft"
Unless you check carefully, it's becoming hard to tell whether you are on Apple's, Google's or Microsoft's website, now that each has embraced the post-PC era with smartphones and tablets that all look pretty much alike: R. Colin Johnson
Google's new smartphone, mini-tablet and full-size tablet line-up pictured here (made by LG, Asustek and Samsung, respectively) is hard to distinguish from Apple's iPhone, iPad Mini and full-size iPad (all made by Foxconn). All Google needs now is a music player, like the iPod, and the two line-ups would be nearly indistinguishable. (Oh, and don't forget how Google Play is modeled on iTunes!)
Here is what Google says about its latest iPad lookalike: With a dazzling 2560-by-1600 high-resolution display and powerful graphics processor, Nexus 10 places you right inside the action with picture-perfect performance. Over 4-million pixels in your hands means that text is sharper, HD movies are more vivid and photos look as clear as the day you took them.
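A two-line check of the pixel arithmetic in Google's copy, using the 2560-by-1600 resolution quoted above:

```python
# Quick check of the "over 4-million pixels" claim for a 2560 x 1600 panel.
width_px, height_px = 2560, 1600
print(f"{width_px * height_px:,} pixels")   # 4,096,000: just over four million, as claimed
```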
All your favorite Google Play content looks great on Nexus 10. Magazines come alive with rich images and razor sharp text. With movies and TV available in full 1080p, you’ll always have the best seat in the house. Nexus 10 was made to share. Just turn on your tablet and tap your photo to sign in and get access to your own private homescreen, apps, email, photos, storage, and more.
Personalizing your homescreen is easy. Choose your own wallpaper, add your favorite apps and games from Google Play, create folders, and arrange beautiful new widgets just the way you like them -- it’s as easy as drag-and-drop.
Video chat with Google+ Hangouts.
Nexus 10 lets you video chat with up to nine friends at once with Google+ Hangouts, and with a 1.9 megapixel front facing camera and microphone noise-cancellation, you’ll always come through loud and clear.
Updated to bring you the web in HD, Chrome is now better than ever on Nexus 10. Advanced MIMO WiFi and accelerated page loading give you web browsing speeds up to 4x faster* than normal WiFi.
View, edit and share your photos on Nexus 10. The ultra high-resolution display lets you relive each moment in stunning detail, while powerful new editing tools make it easy to touch up your best shots before sharing them with friends and family with just a few taps.
Further Reading
Monday, October 29, 2012
#ROBOTICS: "Jumping Robots to Extend Battery Life"
Robots could drastically extend their battery life by jumping instead of walking or rolling as they do today--especially if robots adopt a unique two-step 'stutter jump' technique discovered by researchers at the Georgia Institute of Technology: R. Colin Johnson
Georgia Tech Assistant Professor Daniel Goldman (left) and Graduate Student Jeffrey Aguilar examine a simple robot built to study the dynamics of jumping. The research could lead to reduced power consumption by hopping robots. (Click image for high-resolution version. Credit: Gary Meek)
Here is what Georgia Tech says about jumping robots: Stutter Jumping: Study of 20,000 Jumps Shows How a Hopping Robot Could Conserve its Energy
A new study shows that jumping can be much more complicated than it might seem. In research that could extend the range of future rescue and exploration robots, scientists have found that hopping robots could dramatically reduce the amount of energy they use by adopting a unique two-part “stutter jump.”
Taking a short hop before a big jump could allow spring-based “pogo-stick” robots to reduce their power consumption as much as ten-fold. The formula for the two-part jump was discovered by analyzing nearly 20,000 jumps made by a simple laboratory robot under a wide range of conditions.
“If we time things right, the robot can jump with a tenth of the power required to jump to the same height under other conditions,” said Daniel Goldman, an assistant professor in the School of Physics at the Georgia Institute of Technology. “In the stutter jumps, we can move the mass at a lower frequency to get off the ground. We achieve the same takeoff velocity as a conventional jump, but it is developed over a longer period of time with much less power.”
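Goldman's point is easiest to see with a back-of-the-envelope calculation: delivering the same takeoff kinetic energy over a longer push lowers the average power. The 1 kg mass below is the robot's mass from the article; the takeoff velocity and push durations are assumptions chosen only to illustrate the scaling.

```python
# Minimal sketch: same takeoff velocity, longer push, less average power.
# Gravity and losses are ignored, so this is only an order-of-magnitude illustration.

m = 1.0          # robot mass in kg (from the article)
v_takeoff = 1.5  # takeoff velocity in m/s (assumed)

kinetic_energy = 0.5 * m * v_takeoff ** 2   # energy the actuator must deliver

for push_time in (0.05, 0.5):               # quick jump vs. slow "stutter" build-up (s)
    avg_power = kinetic_energy / push_time
    print(f"push over {push_time * 1000:.0f} ms -> average power {avg_power:.1f} W")

# Stretching the push from 50 ms to 500 ms cuts average power tenfold,
# the same flavor as the "a tenth of the power" result quoted above.
```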
The research was reported October 26 in the journal Physical Review Letters. The work was supported by the Army Research Laboratory’s MAST program, the Army Research Office, the National Science Foundation, the Burroughs Wellcome Fund and the GEM Fellowship.
Jumping is an important means of locomotion for animals, and could be important to future generations of robots. Jumping has been extensively studied in biological organisms, which use stretched tendons to store energy.
The Georgia Tech research into robot jumping began with a goal of learning how hopping robots would interact with complicated surfaces – such as sand, granular materials or debris from a disaster. Goldman quickly realized he’d need to know more about the physics of jumping to separate the surface issues from the factors controlled by the dynamics of jumping.
Inspired by student-directed experiments on the dynamics of hopping in his nonlinear dynamics and chaos class, Goldman asked Jeffrey Aguilar, a graduate student in the George W. Woodruff School of Mechanical Engineering, to construct the simplest jumping robot. Aguilar built a one-kilogram robot that is composed of a spring beneath a mass capable of moving up and down on a thrust rod. Aguilar used computer controls to vary the starting position of the mass on the rod, the amplitude of the motion, the pattern of movement and the frequency of movement applied by an actuator built into the robot’s mass. A high-speed camera and a contact sensor measured and recorded the height of each jump.
The researchers expected to find that the optimal jumping frequency would be related to the resonant frequency of the spring and mass system, but that turned out not to be true. Detailed evaluation of the jumps showed that frequencies above and below the resonance provided optimal jumping – and additional analysis revealed what the researchers called the “stutter jump.”
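The resonance expectation itself is easy to reproduce with a toy model. The sketch below simulates a sinusoidally forced linear spring-mass hopper; in this idealized model the best jumps do cluster near the resonant frequency, which is exactly the intuition the real robot's off-resonance stutter jump ended up overturning. All parameter values are illustrative assumptions, and the model is not the Georgia Tech robot's actual dynamics.

```python
import math

# Toy forced spring-mass hopper (NOT the Georgia Tech robot's actual dynamics):
# a 1 kg mass on a linear spring resting on rigid ground, shaken by a sinusoidal
# force while grounded.  Parameter values are assumptions chosen for illustration.

m, k, L0, g = 1.0, 400.0, 0.20, 9.81      # mass, spring stiffness, rest length, gravity
F0, dt, t_max = 5.0, 1e-4, 5.0            # forcing amplitude, time step, sim length

def apex_height(freq_hz):
    """Return the highest point the mass reaches after its first takeoff."""
    w = 2 * math.pi * freq_hz
    y, v, t = L0 - m * g / k, 0.0, 0.0    # start at static equilibrium
    while t < t_max:
        a = (k * (L0 - y) + F0 * math.sin(w * t)) / m - g   # grounded dynamics
        v += a * dt
        y += v * dt
        t += dt
        if y >= L0 and v > 0:             # spring unloaded: foot leaves the ground
            return y + v * v / (2 * g)    # ballistic apex of the mass
    return L0                             # never took off at this frequency

f_res = math.sqrt(k / m) / (2 * math.pi)  # linear resonance, about 3.2 Hz here
best = max((apex_height(f), f) for f in [x * 0.25 for x in range(4, 41)])
print(f"resonance {f_res:.2f} Hz, best toy-model jump at {best[1]:.2f} Hz")
```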
“The preparatory hop allows the robot to time things such that it can use a lower energy to get to the same jump height,” Goldman explained. “You really don’t have to move the mass rapidly to get a good jump.”
The amount of energy that can be stored in batteries can limit the range and duration of robotic missions, so the stutter jump could be helpful for small robots that have limited power. Optimizing the efficiency of jumping could therefore allow the robots to complete longer and more complex missions.
But because it takes longer to perform than a simple jump, the two-step jump may not be suitable for all conditions.
“If you’re a small robot and you want to jump over an obstacle, you could save energy by using the stutter jump even though that would take longer,” said Goldman. “But if a hazard is threatening, you may need to expend the additional energy to make a quick jump to get out of the way.”
For the future, Goldman and his research team plan to study how complicated surfaces affect jumping. They are currently studying the effects of sand, and will turn to other substrates to develop a better understanding of how exploration or rescue robots can hop through them.
Goldman’s past work has focused on the lessons learned from the locomotion of biological systems, so the team is also interested in what the robot can teach them about how animals jump. “What we have learned here can function as a hypothesis for biological systems, but it may not explain everything,” he said.
The simple jumping robot turned out to be a useful system to study, not only because of the interesting behaviors that turned up, but also because the results were counter to what the researchers had expected.
“In physics, we often study the steady-state solution,” Goldman noted. “If we wait enough time for the transient phenomena to die off, then we can study what’s left. It turns out that in this system, we really care about the transients.”
This research is supported by the Army Research Laboratory.
Further Reading
#ROBOTICS: "NASA Robot Challenge Offers $1.5 Million"
Autonomous robots capable of retrieving geological samples on their own, without the need for remote-control operators, are the goal of the 2013 NASA Robot Challenge, which offers a $1.5 million purse: R. Colin Johnson
Here is what NASA says about its Robot Challenge: NASA and the Worcester Polytechnic Institute (WPI) in Worcester, Mass., have opened registration and are seeking teams to compete in next year's robot technology demonstration competition, which offers as much as $1.5 million in prize money.
During the 2013 NASA-WPI Sample Return Robot Challenge, teams will compete to demonstrate that a robot can locate and retrieve geologic samples from a wide and varied terrain without human control. The objective of the competition is to encourage innovations in automatic navigation and robotic manipulator technologies. Innovations stemming from this challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. The competition is planned for June 2013 in Worcester, Mass., attracting competitors from industry and academia nationwide.
NASA is providing the prize money to the winning team as part of the agency's Centennial Challenges competitions, which seek unconventional solutions to problems of interest to the agency and the nation. While NASA provides the prize purse, the competitions are managed by non-profit organizations that cover the cost of operations through commercial or private sponsorships.
"We've opened registration and are eager to see returning teams, and new challengers, enter this second Sample Return Robot Challenge," said NASA Space Technology Program Director Michael Gazarik at the agency's Headquarters in Washington. "Contests like NASA's Centennial Challenges are an excellent example of government sparking the engine of American innovation and prosperity through competition while keeping our nation on the cutting edge of advanced robotics technology. Teams from academia, industry and even citizen-inventors are all invited to join the competition and help NASA solve real technology needs. With a $1.5 million prize purse, we're looking forward to seeing some great technology that will enable our future missions and advance robotics right here in America."
The first Sample Return Robot Challenge, which took place in June, also was held at WPI. While almost a dozen teams entered the competition, none qualified to compete for the prize purse. NASA and WPI are partnering again to repeat and advance the competition, which is expected to draw more competitors and greater technological innovation from among the teams.
"We're honored and excited to once again host the Sample Return Robot Challenge," said WPI President and CEO Dennis Berkey. "This year, 7,000 people turned out to watch the competition, which was the first of its kind on the East Coast, and to enjoy WPI's fantastic Touch Tomorrow Festival of Science, Technology and Robots. This university is a hub of expertise and innovation within the area of robotics, and it's a pleasure to engage people of all ages and backgrounds in the wonders of this competition, this festival, and this emerging field."
There have been 23 NASA Centennial Challenges competition events since 2005, and through this program NASA has awarded more than $6 million to 15 different challenge-winning teams. Competitors have included private companies, student groups and independent inventors working outside the traditional aerospace industry. Unlike contracts or grants, prizes are awarded only after solutions are successfully demonstrated.
WPI is one of the only universities to offer bachelor's, master's, and doctoral degrees in robotics engineering. In 2007, the university was the first in the nation to offer a bachelor's degree program in this area. Through its Robotics Resource Center, WPI supports robotics projects, teams, events and K-12 outreach programs. Each year, WPI manages at least seven competitive robotics tournaments and also has sponsored programs that foster the use of robots to solve important societal problems and encourage consideration of the societal implications of this new area of technology.
Further Reading
Friday, October 26, 2012
#TABLETS: "For the Rest of Us Run Windows 8"
Since the start of the tablet mania sparked by Apple's iPad and Google's Nexus tablets, Windows users have been suffering from tablet-envy. No more: the largest user base in the world has now begun the transition from the PC- to the tablet-computing era: R. Colin Johnson
Here is what Windows8Center says about tablet-envy: Surface represents Microsoft’s 21st-century approach to computing, arguably as innovative as the previous century’s move from DOS to Windows. The touch-enabled, tile-based user interface answers Apple’s iPad and Google’s Android.
More importantly, it represents the natural evolution of the Windows user – and developer – base into tablet computing. Surface moves beyond PC-style direct access to files, refocusing users on content rather than the technical expertise needed to access and manipulate it.
Further Reading
#WIRELESS: "Telit's M2M Adopts LTE at 100Mbps"
The machine-to-machine (M2M) communications market got a boost to 100 Mbits per second by virtue of Telit's new LTE-based cellular modem technology: R. Colin Johnson
Here is what Telit says about its new 100Mbps M2M modem: Telit Wireless Solutions, a leading global vendor of high-quality machine-to-machine (M2M) modules and value-added services, today announced the introduction of the LE920 LTE module for European and North American OEM automotive and aftermarket segments. The new 920 form factor measures 34x40x2.8mm on a 198-pad LGA automotive-grade package. The product delivers 100Mbps-down and 50Mbps-up data rates on LTE networks and is fully fallback compatible with DC-HSPA+, delivering up to 42Mbps-down and 5.76Mbps-up where available. Quad-band GSM/GPRS and EDGE performance ensure the module connects even in remote areas devoid of 3G or 4G coverage. Equipped with a high-performance multi-constellation GPS plus GLONASS receiver, the LE920 module provides superior navigation coverage even in harsh environments and challenging urban canyons with fast and accurate position fixes, making it ideally suited for full-featured integrated navigation systems and location-based services delivered through the car’s infotainment system.
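For a feel for what those link rates mean in practice, here is a rough transfer-time comparison using the peak figures quoted above. The rates are theoretical peaks, and the 100 MB payload is an assumed example for illustration, not a Telit figure.

```python
# Rough transfer-time comparison at the peak data rates quoted above.
# Real-world throughput is lower; the payload size is an assumption.

rates_mbps = {"LTE (down)": 100, "DC-HSPA+ (down)": 42, "LTE (up)": 50, "DC-HSPA+ (up)": 5.76}
payload_megabytes = 100                       # e.g. a map or firmware update (assumed)
payload_megabits = payload_megabytes * 8

for link, mbps in rates_mbps.items():
    print(f"{link:16s}: {payload_megabits / mbps:6.1f} s for {payload_megabytes} MB")
```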
Telit’s LE920 includes distinguishing features such as Rx diversity which allows the end-device to be equipped with two distinct cellular antennas improving the quality and reliability of the wireless connectivity in urban areas, undergrounds, and other similar scenarios, making the product ideal for applications such as an in-vehicle hot-spot. Fully voice capable, the LE920 provides full-duplex PCM as well as analog input and output for applications such as a hands-free in-car cellular functionality, fleet management and other vehicle voice gateway applications. For easy integration, the product includes a USB 2.0 high-speed port and device drivers for most Windows and Linux platforms, and will be available in local variants as required for all major LTE carriers and partner networks in North America and Europe.
The LE920 was designed to meet the most current specifications for the European eCall and Russian ERA-Glonass programs. Mandated by the European Commission, all new cars must have an eCall automatic emergency in-vehicle call system installed by 2015. The system is designed to enhance quality and speed of rescue operations in automotive accident response efforts. In case of a crash, the eCall system transfers the necessary accident data to the nearest emergency service response center expediting life saving measures. ERA-Glonass was designed to be compatible with the European eCall standard. The objective of the program is to combine mobile communications and satellite positioning providing for faster assistance in case of automotive collisions. Its infrastructure is planned to be installed in 2013 with systems mandated in all vehicles in 2014. North America has no similar program.
“The LE920 is Telit’s new high-performance automotive product which also introduces to the market our new form-factor for this segment. One which provides our R&D and product management teams with lots of space for future integration of new and valuable features and resources for our automotive customers,” said Dominikus Hierl, chief marketing officer at Telit Wireless Solutions. “With the LE920’s comprehensive and industry-leading LTE multi-band support, the connected vehicle can provide passengers the same comfort and convenience in terms of internet and information access just as they would enjoy at the home or office - with no compromise in performance.”
As the LE920 was conceived specifically for the automotive market, it boasts an extended temperature range, operating from -40°C to +85°C, and is designed and manufactured under the strict automotive quality standards specified in ISO/TS 16949. Additionally, the materials, facilities, and processes applied to the LE920 also comply with the Production Part Approval Process (PPAP), a full-traceability framework standard adopted by the automotive industry which, among other things, allows defective parts to be traced back through their full genealogy up the supply chain.
The industry’s only pure-play m2m company, Telit creates value by partnering with customers to provide expert guidance and support from concept development through to manufacturing, quickly bringing ideas to market in all application areas, including the new “smart” space. With service-enhanced products in cellular, short-range, and satellite navigation easily bundled through high-level software interfaces, Telit-powered m2m devices cost less to integrate, maintain, operate, and update, with lower price points for bundled products and savings translating into competitive advantage at the time of sale and throughout the operating life of the customer device.
Further Reading
Thursday, October 25, 2012
#CHIPS: "Communications and Automotive Driving Global Chip Market"
Communications and automotive applications are driving the global market for semiconductor chips, according to IC Insights: R. Colin Johnson
The communications and automotive IC markets are forecast to out-pace the growth of the total IC market through 2016, according to data in IC Insights’ soon-to-be-released IC Market Drivers 2013 — A Study of Emerging and Major End-Use Applications Fueling Demand for Integrated Circuits. The communications segment is forecast to see a compound annual growth rate (CAGR) of 14.1% from 2011-2016, almost double the 7.4% CAGR expected for the entire IC market during this timeframe. The automotive IC market is also expected to exceed total IC market growth during this period, growing at an average annual rate of 9.0%.
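As a quick consistency check, those CAGRs reproduce the cumulative growth figures cited in the bullets below:

```python
# A CAGR applied over 2011-2016 (five years) should match IC Insights' cumulative figures.

def cumulative_growth(cagr, years=5):
    return (1 + cagr) ** years - 1

print(f"14.1% CAGR -> {cumulative_growth(0.141):.0%} total growth")  # ~93%, matching the "increase of 94%" for communications ICs (rounding)
print(f" 9.0% CAGR -> {cumulative_growth(0.090):.0%} total growth")  # ~54%, matching the "53% greater" automotive IC market
```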
• The communications IC market is forecast to reach almost $160 billion in 2016, an increase of 94% from 2011. The Asia-Pacific region is forecast to represent 61% of the total communications IC market in 2012, increasing from 59% in 2011.
• At approximately $28.0 billion, the 2016 automotive IC market is forecast to be 53% greater than the size of the automotive IC market in 2011. Europe is the largest regional market for automotive ICs, accounting for 37% of the market in 2012. However, by 2016, the Asia-Pacific automotive market is forecast to be nearly the same size as the European market.
• An aging global population is driving demand for ICs in home healthcare and medical applications within the industrial segment. Analog ICs are forecast to represent 45% of the total industrial IC market in 2012, and are forecast to account for the largest portion of the industrial IC market through 2016.
• The worldwide government/military IC market is forecast to reach $2.46 billion in 2016 but represent only 0.7% of the total IC market at that time, the same percentage as in 2011.
• The computer IC market is forecast to represent 34.0% of the total IC market in 2016, down from 41.7% in 2011. A 12% decline in the computer memory market is expected to cause the total computer IC market to decline by 9% in 2012, the second consecutive year of decline.
• The 2011-2016 consumer IC market is forecast to register a 1.9% CAGR, slowest among all end-use categories and 5.5 points less than the total IC market. Japan, once the stronghold of the consumer electronics business, is forecast to hold less than half the share (22%) of the consumer IC market as compared to the Asia-Pacific region (50%) in 2012.
Further Reading
Wednesday, October 24, 2012
#WIRELESS: "Vast Majority of Smartphones to Sport 5GHz WiFi by 2015"
Over 70 percent of smartphones will ship with 5GHz band IEEE 802.11ac WiFi capabilities by 2015, according to Allied Business Intelligence (ABI Research, Oyster Bay, NY): R. Colin Johnson @NextGenLog
Here is what ABI says about WiFi in smartphones: Wi-Fi protocols have changed significantly over the last two to three years and almost every smartphone shipped this year will offer some form of Wi-Fi capabilities. However, a new Wi-Fi protocol will begin to dominate mobile devices soon. New market intelligence from ABI Research projects the IEEE 802.11ac Wi-Fi protocol will begin to conquer the existing protocols (802.11b, g, and n) in the next two to three years.
“The Wi-Fi 802.11ac protocol offers several advantages over the current and most commonly used 802.11n protocol,” says senior analyst Josh Flood. “Firstly, the wireless connection speed will be quicker; the new protocol also offers better range and improved reliability, and superior power consumption. It’s also capable of multiple 2X2 streams and should be particularly good for gaming experiences and HD video streaming on mobile devices.”
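For reference, the headline 2x2 802.11ac rate Flood alludes to can be reconstructed from the protocol's nominal per-symbol parameters. The constants below are the commonly cited 80 MHz / 256-QAM / rate-5/6 / short-guard-interval figures, stated here as assumptions since the ABI announcement does not spell them out.

```python
# Back-of-the-envelope PHY rate for a 2x2 802.11ac link (nominal figures assumed).

data_subcarriers = 234        # 80 MHz channel
bits_per_subcarrier = 8       # 256-QAM
coding_rate = 5 / 6
symbol_time_s = 3.6e-6        # 3.2 us symbol plus 0.4 us short guard interval

per_stream_bps = data_subcarriers * bits_per_subcarrier * coding_rate / symbol_time_s
print(f"1 stream: {per_stream_bps / 1e6:.0f} Mbps, 2x2: {2 * per_stream_bps / 1e6:.0f} Mbps")
# ~433 Mbps per stream and ~867 Mbps for two streams, well above typical 802.11n rates.
```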
The Market Data “Mobile Device Enabling Technologies” provides further details on handset technologies by key regions. Additionally, further technology and regional shipments are presented for the following technologies in mobile handsets and smartphones: GPS, Wi-Fi, Bluetooth, speech recognition, NFC, camera embedded, front facing camera, touchscreens, accelerometers, gyroscopes, altimeters, magnetometers, MEMS microphones, 3-D displays, gesture recognition, and facial recognition. These findings are part of the Mobile Device Technologies Research Service.
Further Reading
Tuesday, October 23, 2012
#TABLETS: "Supply Shortages Limiting iPad Mini Shipments"
Shipments of Apple's new iPad Mini will be limited in 2013 due to shortages in its supply chain, according to DisplaySearch: R. Colin Johnson
Here is what DisplaySearch says about the iPad Mini: Apple revamped its mobile PC lineup on Tuesday, announcing additions to its iPad and MacBook Pro lines. As DisplaySearch had anticipated, the company introduced the iPad Mini, a refreshed iPad (the iPad with Retina Display), and a 13.3” MacBook Pro notebook.
The $329 iPad Mini comes with a 7.9” 1024 x 768 display, dual core A5 processor, and up to 10 hours of battery life. The $499 iPad with Retina Display comes with the same 9.7” 2048 x 1536 display as the new iPad (which is no longer listed on the Apple website) but features an A6 processor, and up to 10 hours of battery life. The iPads can be pre-ordered on October 26 and ship on November 2. The $1,699 MacBook Pro with Retina Display features a 13.3” 2560 x 1600 screen and is currently available.
As is typical, we expect the iPads to be supply constrained initially, especially the iPad Mini with its $329 price. The new low price point is expected to appeal to a wider audience and drive up demand. However, panel supply chain indications point to an even more than typical tightness in the market for the iPad Mini.
Apple is expanding its supplier base with new partners for the iPad Mini. Apple will continue to work with LG Display, which is supplying panels to Foxconn for the finished product, and is adding AUO, which will supply panels to Pegatron. However, AUO is having yield issues with the 7.9” panel, which is limiting its supply to Pegatron; in September, AUO shipped just over 100,000 units. The production plan is to reach 400,000 units in October, 800,000 units in November and 1 million in December. LG Display shipped 300,000 panels in September, and plans to ship 1 million in October, 2.5 million in November, and 3 million in December.
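Summing the shipment plans quoted above gives a rough ceiling on how many iPad Mini panels are in the pipeline through December, assuming one panel per unit and ignoring any yield fallout after panel shipment:

```python
# Panel supply through December, in millions of 7.9" panels (Sep, Oct, Nov, Dec).
auo = [0.1, 0.4, 0.8, 1.0]   # AUO: "just over 100,000" in September, then the stated ramp
lgd = [0.3, 1.0, 2.5, 3.0]   # LG Display's stated plan

print(f"AUO: {sum(auo):.1f}M, LG Display: {sum(lgd):.1f}M, total: {sum(auo) + sum(lgd):.1f}M panels")
```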
Samsung has been one of the leading panel suppliers for the iPad. In fact when the new iPad was first released, Samsung was the only supplier that could meet production orders with LG Display gradually ramping up to meet demand. However, Samsung and Apple appear to be winding down their relationship most likely due to the legal conflicts the two have been embroiled in recently. In previous iPad launches, LG Display and Samsung have been the main panel suppliers with roughly equal panel production.
Further Reading
Monday, October 22, 2012
#ROBOTICS: "LG Home-Bot Going Mainstream"
Robotics is going mainstream thanks to LG's worldwide introduction of its HOM-BOT, which looks eerily similar to iRobot's Roomba: R. Colin Johnson
Here is what LG says about its robotic vacuum cleaner: LG Electronics’ HOM-BOT SQUARE robotic vacuum cleaner made its global debut in Place Beaubourg, Paris, where it demonstrated to an international audience what the next generation of robotic cleaning technology would bring.
“Helping consumers lead more convenient lives with smart technology is what LG is all about,” said Moon-bum Shin, Executive Vice President and CEO of the LG Electronics Home Appliance Company. “HOM-BOT SQUARE is contributing toward this goal in a large way on the strength of its innovation and competitiveness in this growing market.”
With sharp right-angles being found in practically every building in the world, LG engineers based the new HOM-BOT on a square, different from the circular design of most robot cleaners. The cleaner’s unique square design, hi-tech sensors and newly improved brushes (1.5cm longer than those in the previous model) are collectively called Corner Master and enable HOM-BOT SQUARE to more effectively reach areas that other cleaners simply ignore.
Corner-detecting sensors supply the cleaner with spatial information, telling it when the edge of the room has been reached, when to turn and when to stop. Sensitive Dual Eye 2.0™ camera sensors scan the floor, sampling multiple images per second and then analyze the information to generate an accurate map of the space – even with the lights off. Onboard ultrasonic and infrared sensors allow the HOM-BOT SQUARE to detect and easily avoid obstacles in its path.
LG’s HOM-BOT SQUARE also features Turbo Mode, which allows the user to manually set cleaning functions to the specific requirements of their flooring. And Smart Turbo Mode enables the cleaner to detect the type of flooring and change its own settings automatically.
Visitors to LG’s HOM-BOT SQUARE launch event in France were given the opportunity to play a live version of LG’s dust-killing internet game. Participants used a remote to navigate the vacuum cleaner around a course and vacuum up simulated dust characters. Those who “caught” the most characters in the allocated time took home a new HOM-BOT SQUARE cleaner.
Launching first in France, HOM-BOT SQUARE will be available in other European markets in the fourth quarter followed by its global launch in 2013.
Key Specifications for LG’s new HOM-BOT SQUARE:
Corner Master
Dual Eye 2.0™
Easy-out Dust Bin
Low noise level: 60 dBA
HEPA 11 Filter
Smart Turbo
Learning Function
Voice Guidance
Long-lasting Battery
Further Reading
Friday, October 19, 2012
#ALGORITHMS: "Smart Framework Enables Internet-of-Things"
By the end of the decade, trillions of devices on the Internet-of-Things will dominate global communications with machine-to-machine (M2M) transactions. Today each M2M service provider has proprietary protocols, but Intel is aiming to standardize those communications with a cookbook-style technology for brewing up interoperable connections on the Internet-of-Things: R. Colin Johnson
Cloud services will extract the ultimate value from the Internet of Things by deriving value from data captured at every step in the system—from sensors to gateways to cloud.
Here is what Go-Parallel says about Intel's efforts: A new architectural framework for embedded devices on the quickly growing Internet of Things has been released by Intel with its McAfee and Wind River subsidiaries.
The Intelligent Systems Framework (ISF) aims to facilitate easier coordinated use of multiple Atom, Core and Xeon processors in distributed embedded systems. ISF works with all the serial and parallel programming tools Intel already offers, and adds numerous enhancements to virtualization, trusted execution and remote management that specifically support machine-to-machine (M2M) interactions among devices on the Internet of Things.
Industry researcher IDC predicts that by 2015, a third of all connected systems will be intelligent, representing a $2 trillion market. By the end of the decade, analysts predict trillions of embedded devices will be addressable over the Internet, offering incredible business opportunities. The Intelligent Systems Framework, in cooperation with McAfee security and Wind River real time support, aims to provide a standardized and scalable way to grow the Internet of Things.
Further Reading
Thursday, October 18, 2012
#MATERIALS: "ST Embraces Next-Gen Silicon-on-Insulator"
Silicon-on-insulator (SOI) wafers have a buried layer of insulating silicon dioxide atop which transistors are fabricated, adding to the cost of chips--since SOI wafers are more expensive--but providing isolation from the substrate and nearby devices. It took the innovation of fully depleted transistors (FD-SOI), however, to push the technology mainstream, since FD-SOI achieves leakage currents that rival those of Intel's FinFETs, allowing chip makers to "catch up with Intel." STMicroelectronics is the first major chip maker to commit to FD-SOI and, in cooperation with SOI wafer provider Soitec, has just offered its 28-nanometer FD-SOI process to European researchers: R. Colin Johnson
Here is what ST says about its FD-SOI process: STMicroelectronics (NYSE:STM), Soitec (Euronext) and CMP (Circuits Multi Projets) today announced that ST’s CMOS 28nm Fully Depleted Silicon-On-Insulator (FD-SOI) process, which uses innovative silicon substrates from Soitec, is now available for prototyping to universities, research labs and design companies through the silicon brokerage services provided by CMP. ST is releasing this process technology to third parties as it nears completion of its first commercial wafers.
The introduction in CMP’s catalogue of ST’s 28nm FD-SOI CMOS process builds on the successful collaboration that has allowed universities and design firms to access previous CMOS generations including 45nm (introduced in 2008), 65nm (introduced in 2006), 90nm (introduced in 2004), and 130nm (introduced in 2003). CMP’s clients also have access to 65nm and 130nm SOI (Silicon-On-Insulator), as well as 130nm SiGe processes from STMicroelectronics. For example, 170 universities and other companies have received the design rules and design kits for the ST 90nm CMOS process, and more than 200 universities and companies have received the design rules and design kits for the ST 65nm bulk and SOI CMOS processes.
Since CMP started offering the ST 28nm CMOS bulk technology in 2011, some 60 universities and microelectronics companies have received the design rules and design kits and 16 integrated circuits (ICs) have already been manufactured.
“There has been a great interest in designing ICs using these processes, with about 300 projects having been designed in 90nm (phased out in 2009), and more than 300 already in bulk 65nm,” said Bernard Courtois, Director of CMP. “In addition, more than 60 projects have already been designed in 65nm SOI and it is interesting to note that many top universities in Europe, USA/Canada and Asia have already taken advantage of the collaboration between CMP and ST.”
The CMP multi-project wafer service allows organizations to obtain small quantities--typically from a few dozen to a few thousand units--of advanced ICs. The cost of the 28nm FD-SOI CMOS process has been fixed at 18,000 €/mm2, with a minimum of 1 mm2.
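A quick worked example of that pricing--mine, not CMP's, and assuming a hypothetical 4 mm2 prototype die--in C:

    #include <stdio.h>

    /* Worked example of the CMP multi-project-wafer pricing quoted above:
       18,000 euros per mm2 with a 1 mm2 minimum. The 4 mm2 die area is a
       hypothetical figure for illustration, not one from CMP. */
    int main(void)
    {
        const double eur_per_mm2 = 18000.0;
        const double min_mm2     = 1.0;
        const double die_mm2     = 4.0; /* hypothetical 2 mm x 2 mm test chip */

        double billed_mm2 = die_mm2 < min_mm2 ? min_mm2 : die_mm2;
        printf("28nm FD-SOI MPW run for a %.1f mm2 die: %.0f euros\n",
               die_mm2, billed_mm2 * eur_per_mm2);
        return 0;
    }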
“With the first designs in FD-SOI technology already in the pipeline, the time is right to make the technology available to the research communities. Our FD-SOI manufacturing process allows existing designs to be quickly and easily ported to FD-SOI where significant power and performance benefit can be realized,” said Philippe Magarshack, Executive Vice President, Design Enablement and Services, STMicroelectronics. “In addition, ensuring that universities have access to our leading-edge technologies can help us attract the best young engineers as part of our commitment to remain a technology leader on a long-term basis.”
“Our partnership with STMicroelectronics and CMP is an additional example of Soitec’s commitment to providing differentiated materials solutions to the open market, supporting the continual expansion of the FD-SOI ecosystem and users of advanced technologies,” said Steve Longoria, senior vice president of worldwide strategic business development for Soitec. “Through this partnership we will see new and innovative products based on Soitec's FD-SOI materials, as a result of providing universities and other customers with a proven path for developing and testing next-generation integrated circuits.”
Further Reading
Wednesday, October 17, 2012
#ALGORITHMS: "Cisco/Citrix Target Virtualization in the Clouds"
Virtualization and cloud computing fit hand-in-glove as enterprises migrate to a mix of user platforms--smartphones, tablets, laptops, desktops--to access the same corporate information. Virtualization simplifies deployment and allows IT to manage devices remotely for higher efficiency and increased security, and cloud deployments make it affordable. Recognizing this trend, Citrix, which absorbed Virtual Computer earlier this year, just announced a deal with networking giant Cisco to meld virtualization software with cloud infrastructure, creating a seamless migration path to virtualization in the clouds for enterprises aiming to modernize: R. Colin Johnson
Here is what Citrix says about its new relationship with Cisco: Expanding business partnerships can be an exciting, even historic proposition. They’re a chance to forge new relationships, improve products and services, reinforce to customers what makes your business special and, ultimately, improve how people work. This is why we’re so excited about our extended partnership with Cisco, announced today at the Citrix Synergy conference in Barcelona.
We’ve been working successfully with Cisco for more than a year now, and in that time we have doubled our business together by deploying tens of thousands of new virtual desktops to companies across the globe. We recognize the significant transformation taking place in enterprise networks as companies support more mobile, anytime anywhere access to enterprise applications. As a result, Citrix and Cisco are taking another leap forward in our partnership – extending into networking, cloud and mobility:
Mobile Workstyles: We’re teaming up to develop mobile workstyle and BYOD (bring your own device) solutions that give mobile users better ways to access business data and apps from any device, anywhere.
Cloud Orchestration: We’ll provide an integrated cloud solution that will help enterprise and service provider customers deliver better public, private and hybrid clouds.
Cloud Networking: We are combining Cisco’s leadership in the datacenter with strength in application delivery from Citrix to deliver a best-of-breed approach to designing next-generation, cloud-ready networks.
In phase one of our networking partnership, Cisco sales teams will now recommend Citrix NetScaler ADC for Cisco Unified Data Center Architecture and Solutions. This will enable our mutual customers to deliver any application or service with the best possible performance, security and availability. Additionally, Citrix is developing a suite of migration tools, reference documents and services to ensure seamless integration of Citrix NetScaler into Cisco Cloud Network Services architectures.
To fully support customers during this transition phase, Citrix is offering a new ACE Migration Program (AMP) to all global customers.
Further Reading
Tuesday, October 16, 2012
#ALGORITHMS: "Hyper-threading: Perfect for Neural Networks"
Today artificial neural networks (ANNs) are experiencing a resurgence, thanks to the success of high-profile applications that put smarts into all sorts of apps, such as voice dictation, gesture navigation and knowledge representation. Luckily, the emerging legions of multi-core processors from Intel, AMD, Freescale and others support multiple hardware threads, which makes them ideal for programming neural networks: R. Colin Johnson
Biological neurons (upper left) are emulated by artificial neural network (ANN) mapping concepts that sum inputs (upper right) then supply an output (bottom) filtered by an activation function. Source: Intel
Here is what Go-Parallel.com says about ANNs: Artificial neural networks (ANNs) are used today to learn solutions to parallel processing problems that have proved impossible to solve using conventional algorithms. From cloud-based, voice-driven apps like Apple’s Siri to realtime knowledge mining apps like IBM’s Watson to gaming apps like Electronic Arts’ SimCity, ANNs are powering voice-recognition, pattern-classification and function-optimization algorithms perfect for acceleration with Intel hyper-threading technology.
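To make the neuron model in the figure concrete, here is a minimal sketch in C--illustrative names only, not any particular library's API--of summing weighted inputs through an activation function, with the per-neuron loop parallelized across hardware threads:

    #include <math.h>

    /* Minimal sketch of an artificial neuron: sum the weighted inputs plus a
       bias, then filter the result through a sigmoid activation function.
       Names and layouts are illustrative only. Compile with -lm. */
    static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

    double neuron_output(const double *in, const double *w, int n, double bias)
    {
        double sum = bias;
        for (int i = 0; i < n; i++)
            sum += in[i] * w[i];
        return sigmoid(sum);
    }

    /* One layer of neurons: each output is independent of the others, so the
       loop can be split across hardware threads (compile with OpenMP support,
       e.g. gcc -fopenmp, to activate the pragma). */
    void layer_forward(const double *in, const double *w, /* n_out rows of n_in weights */
                       const double *bias, double *out, int n_in, int n_out)
    {
        #pragma omp parallel for
        for (int j = 0; j < n_out; j++)
            out[j] = neuron_output(in, &w[j * n_in], n_in, bias[j]);
    }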
Further Reading
Monday, October 15, 2012
#NETWORKING: "Massive Migration to Cloud in 2012"
Cloud computing has been a major enterprise draw over the last several years, but in 2012 the ordinary consumer got into the act with a massive migration to the clouds. Apple, Microsoft, Google and Amazon have coaxed nearly half a billion consumers to begin entrusting their data to cloud servers, potentially creating a cloud service market with billions of subscribers: R. Colin Johnson
Here is what IHS says about the massive migration to the clouds: The consumer cloud performed strongly in the first half of 2012, with the number of personal subscriptions to online storage services at the end of June already at 75 percent of the market’s projected sum for the year, according to insights from the IHS iSuppli Mobile & Wireless Communications Service from information and analytics provider IHS (NYSE: IHS).
The number of global consumers using cloud services after the first six months hit more than 375 million, or about three-quarters of the estimated total of 500 million by year-end. While no firm numbers exist to show the extent of the cloud in 2011 because it was relatively new and untested, best estimates put global subscribers then at approximately 150 million. Subscriptions to either free or paid cloud services will continue to climb in the years ahead, jumping to an estimated 625 million next year, and then doubling over the course of four years to reach 1.3 billion by 2017, as shown in the figure below.
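A quick back-of-the-envelope check of those figures--mine, not IHS's--re-deriving the implied annual growth rate in C:

    #include <math.h>
    #include <stdio.h>

    /* Re-derive the compound annual growth rate implied by the IHS figures
       above: 625 million subscriptions in 2013 growing to 1.3 billion by 2017
       (four years). Compile with -lm. */
    int main(void)
    {
        const double subs_2013 = 625e6;
        const double subs_2017 = 1.3e9;
        const int years = 4;

        double cagr = pow(subs_2017 / subs_2013, 1.0 / years) - 1.0;
        printf("Implied CAGR, 2013-2017: %.1f%% per year\n", cagr * 100.0);
        return 0;
    }

That works out to roughly 20 percent per year.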
“The cloud is a game changer in an age of near-ubiquitous mobile broadband, offering benefits to consumers and cloud service providers alike,” said Jagdish Rebello, Ph.D., director for consumer and communications at IHS. “For consumers, cloud services are intended to manage and store user-generated data or purchased content, such as music, ebooks, pictures or videos. The content can then be seamlessly accessed and synced across devices like smartphones, media tablets and PCs. Meanwhile, technology companies are looking at the cloud as a way to generate revenue.”
Technology giants like Apple, Microsoft, Google and Amazon are using their own cloud offerings to sell hardware, content and other cloud storage services. Such services are often provided at the same cost—or below the cost—of equivalent offerings from pure-play cloud storage providers like Dropbox, Mozy, Carbonite and SugarSync.
To compete with the big players, pure-play cloud providers are adopting a freemium model in which they throw in 2 to 5 Gigabytes of cloud storage for free, and then offer tiered pricing plans for higher levels of storage. In many cases, these service providers limit the size of files that can be stored on their storage service.
The business of providing cloud storage can be costly, however. The cloud industry will continue to lose money from pure cloud offerings, IHS believes, and independent providers will find it extremely difficult to remain financially viable. This, in turn, provides mobile network operators with an attractive opportunity to partner with the pure-play providers and to offer differentiated services.
In addition to generating revenue opportunities, cloud services can create stickiness and reduce churn among the customers of mobile operators. Users with large amounts of data stored on an operator’s cloud service are likely to be reluctant to migrate their content to another operator’s cloud service at the end of a contract period because of the hassle involved, so the cloud can be effectively leveraged as a tool to retain customer loyalty.
All told, the winners in the increasingly tight race among mobile providers to entice consumers to their cloud will be those that can offer a personal service supporting diverse mobile devices and computers on their network, with huge revenue growth potentially at stake.
Further Reading
Friday, October 12, 2012
#NETWORKING: "CBeyond Touts Full-Service Cloud"
Enterprises already committed to Microsoft's Hyper-V virtualization platform would do well to check out a new full-service cloud data center offered by Cbeyond, which enables small and medium-sized businesses to migrate to the cloud without owning or managing their own servers: R. Colin Johnson
Here is what CBeyond says about its TotalCloud Data Center service: Cbeyond Inc. (NASDAQ: CBEY), the technology ally to more than 60,000 small and mid-sized businesses, today announced the launch of its new TotalCloud™ Data Center managed service. With TotalCloud Data Center, small and medium-sized businesses can now access secure, enterprise-class, customizable cloud services without having to purchase, configure, install and manage servers.
Cbeyond’s TotalCloud Data Center service is built on Microsoft Corp.’s Windows Server 2012 Hyper-V platform. It combines enterprise-class networking, storage and security to handle demanding, real-time business application workloads. Available in both public and private formats based on security needs, the service includes managed and monitored security features, backup services and round-the-clock infrastructure monitoring and support.
TotalCloud Data Center enables businesses to adjust storage, security and processing power individually to reflect changing business and operating requirements. Available features include load balancing, private firewalls and dedicated VLANs.
"With our TotalCloud Data Center service, businesses of any size can finally ditch their computer room and customize their own ideal cloud environment for business applications and processes," said Chris Gatch, chief technology officer, Cbeyond.
An early adopter of Microsoft’s Technology Adoption Program (TAP), Cbeyond was one of the first cloud services providers to include customers in beta trials of the latest Windows Server 2012 operating system in March 2012.
"Windows Server 2012 was built from the cloud up, based on Microsoft’s history of running large cloud datacenters and products," said Ian Carlson, Director, Product Marketing, Windows Server, Microsoft. "Groundbreaking storage capabilities, advanced Hyper-V virtualization functionality (including network virtualization), and multi-server management are just a few of Windows Server 2012’s features that offer customers of Cbeyond’s TotalCloud Data Center service a hosted cloud environment that lets them focus on their business instead of technology."
Further Reading
Thursday, October 11, 2012
#MARKETS: "Wearable Electronics on the Rise"
Wearable electronics arguably began with the iPod, but the category is expanding into eyewear, clothing, sports equipment and all sorts of smart-sensor-based electronics, from Nike's FuelBand wristband to BlackBox Biometrics' Blast Gauge: R. Colin Johnson
Here is what IHS says about wearable technology: Encompassing such varied products as augmented-reality eyeglasses, cocktail dresses that light up when a cell phone rings and sports bras that monitor heart rates, the wearable technology market is on the fast track for growth, with shipments likely to rise by more than 500 percent from 2011 to 2016.
In 2011, 14 million wearable technology devices were estimated to have been shipped, according to a new report from IMS Research, recently acquired by IHS (NYSE: IHS). However, by 2016, shipments will increase to 92.5 million units, based on the most likely forecast scenario from IMS Research.
“Wearable technologies provide a range of benefits to users, from informing and entertaining, to monitoring health, to improving fitness, to enhancing military and industry applications,” said Theo Ahadome, senior analyst for medical research at IMS Research. “Because of all these uses, IMS Research foresees major potential for growth in all kinds of wearable technology products.”
If the technology fits, wear it
Wearable technology includes products that are worn on an individual’s body for extended periods of time, significantly enhancing the user experience via features including advanced circuitry, wireless connectivity and independent-processing capability.
Wearable technology fits into four different categories: fitness and wellness, healthcare and medical, industrial and military, and infotainment.
Fitness and wellness wearable technology products are used to monitor activity and emotions, while healthcare and medical devices monitor vital signs and augment senses. Industrial and military wearable technology receives and transmits real-time data in military or industrial environments. Infotainment technology is used to receive and transmit real-time information for entertainment and enhanced-lifestyle purposes.
With the wearable technology segment so broad in terms of products and applications, IHS has developed three scenarios for growth in the coming years: a pessimistic low-end outlook, a most likely midrange forecast and an optimistic, high-end prediction.
The low-end forecast calls for shipments to rise to only 39.2 million in 2016, and presumes wearable technology market growth will be limited by factors such as a lack of product availability, poor user adoption and deficient overall experience. Still, even with these challenges, shipped devices could grow by nearly threefold between 2011 and 2016.
The midrange, most likely scenario described at the opening of this release presumes there will be some market constraints including the lack of reimbursement in medical applications and a paucity in product introductions by major suppliers. However, these will be offset by the improved functionality of non-wearable devices—which helps explain the bigger numbers over the low-end forecast.
The optimistic scenario is one where significant progress and success has been achieved in wearable technology, including the introduction of new products and widespread availability from major brands. In this scenario, 171 million devices will ship in 2016—a whopping twelvefold expansion from last year.
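For comparison's sake, the growth multiples behind all three scenarios follow from the 14 million-unit 2011 baseline; a quick re-derivation (mine, not IMS Research's) in C:

    #include <stdio.h>

    /* Growth multiples for the three IMS Research scenarios above, each
       measured against the estimated 14 million wearable devices shipped
       in 2011. */
    int main(void)
    {
        const double base_2011 = 14.0;                     /* millions of units */
        const double scenarios[3] = { 39.2, 92.5, 171.0 }; /* 2016: low, mid, high */
        const char *names[3] = { "low-end", "midrange", "high-end" };

        for (int i = 0; i < 3; i++)
            printf("%-8s: %6.1f million units in 2016, %4.1fx 2011 shipments\n",
                   names[i], scenarios[i], scenarios[i] / base_2011);
        return 0;
    }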
Notwithstanding the growth figures, these various scenarios reflect continuing uncertainty over the long-term future of wearable technology and the varying factors that affect future outcomes.
The highest revenue-generating areas last year for wearable products were in two segments—healthcare and medical on the one hand, and fitness and wellness on the other.
In both these segments, continuous-glucose monitors were the highest-grossing device for revenue.
The need for continuous data on blood glucose levels, particularly for Type I diabetes patients, has become critical in the treatment of the disease, providing impetus for monitor devices. Medtronic, Abbott and C8 MediSensor are the companies playing heavily in this field.
In the low-end forecast, both segments are expected to continue to account for the highest share of revenue until 2016. In the midrange forecast, infotainment will overtake fitness and wellness to become the second-largest application area in terms of revenue, driven by robust growth in the area of smart watches.
Healthcare and medical will continue to be the largest application area in both low-end and midrange forecasts.
In the high-end forecast, infotainment is projected to account for the largest revenue share of 38 percent by 2016, driven by the uptake of smart watches and smart glasses.
The United States is the leading region for wearable devices at present. This won’t change anytime soon, as IHS forecasts the U.S. will continue to be the largest geographic region for wearable technology through 2016. Europe, meanwhile, is growing its share of revenue for wearable devices and will be the second-largest region by 2016, most notably in the healthcare and medical application area. This is because healthcare providers there are expected to respond to the successful cases recorded in the U.S.
For the rest of world, Japan is expected to constitute the major market, particularly in the infotainment area.
Further Reading
Wednesday, October 10, 2012
#MARKETS: "PCs in Decline Spelling End for an Era"
Personal computers (PCs) were once touted by Byte Magazine as an endless market with no bottom in sight--until now. The PC market began its long decline in 2012 as users switched to smartphones and tablets as their primary devices for staying connected: R. Colin Johnson
Here is what IHS says about the declining PC market and the effect it will have on Intel: After entering the year with high hopes, the global PC market has seen its prospects dim, with worldwide shipments set to decline in 2012 for the first time in 11 years, according to the IHS iSuppli Compute Platforms Service at information and analytics provider IHS (NYSE: IHS).
The total PC market in 2012 is expected to contract by 1.2 percent to 348.7 million units, down from 352.8 million in 2011, as shown in the figure attached. Not since 2001—more than a decade ago—has the worldwide PC industry suffered such a decline.
“There was great hope through the first half that 2012 would prove to be a rebound year for the PC market,” said Craig Stice, senior principal analyst for computer systems at IHS. “Now three quarters through the year, the usual boost from the back-to-school season appears to be a bust, and both AMD and Intel’s third-quarter outlooks appear to be flat to down. Optimism has vanished and turned to doubt, and the industry is now training its sights on 2013 to deliver the hoped-for rebound. All this is setting the PC market up for its first annual decline since the dot-com bust year of 2001.”
The year started off with major hope for Intel’s ultrabooks at the annual Consumer Electronics Show (CES) in Las Vegas. New and innovative form factors like convertibles, combined with the first appearance of Windows 8 demos on display, provided a fresh wave of enthusiasm for the possibility of a revitalized PC market. Even when first-quarter PC shipments came in, the less-than-stellar results were thought to be a minor setback.
The high expectations continued midyear during the big PC event at Computex in Taiwan, as Intel plugged its latest Ivy Bridge processor. Shipments during the second quarter, however, once again disappointed.
For now, important questions remain for the PC market and the rest of the year:
· How much impact will Windows 8 really have toward boosting the PC market in the fourth quarter?
· Will continuing global economic concerns neutralize whatever hype or interest has been generated by ultrabooks?
· Will mobile computing gadgets such as tablets and smartphones win over PCs during the crucial holiday selling season, taking precious consumer dollars and keeping PC sales at bay?
There are signs that a strong rebound could still occur in 2013. While IHS has reduced its forecast for them, the new ultrabooks and other ultrathin notebook computers remain viable products with the potential to redraw the PC landscape, and the addition of Windows 8 to the mix could prove potent and irresistible to consumers. Whether a newly configured PC space could then stand up to the powerful smartphone and tablet markets, however, remains to be seen.
Further Reading
Tuesday, October 09, 2012
#CHIPS: "Freescale Microcontrollers Target Smart Meters"
Smart meters are being installed worldwide to manage electricity consumption more wisely, prompting Freescale to create a new family of microcontrollers designed specifically for them: R. Colin Johnson
Here is what EETimes says about smart meters: Freescale Semiconductor Inc. has unveiled a line of microcontrollers that harness the ultra-low-power 32-bit ARM Cortex-M0+ processor for smart meter applications.
Freescale (Austin, Texas) said the Kinetis M series was specifically designed for low-cost, one- and two-phase smart electrical meters. The Kinetis KW01 includes support for sub-GHz wireless networking for smart energy designs...The M series operates at 120 microamps/MHz in run mode, while the KW01 is said to reduce run-time current to 40 microamps/MHz...
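To put those figures in perspective, here is a rough estimate--mine, not Freescale's, and assuming an illustrative 48 MHz clock--of run-mode current in C:

    #include <stdio.h>

    /* Back-of-the-envelope run-current estimate from the figures quoted above:
       120 microamps/MHz for the Kinetis M series and 40 microamps/MHz claimed
       for the KW01. The 48 MHz clock is an assumption for illustration, not a
       published operating point. */
    int main(void)
    {
        const double ua_per_mhz_m  = 120.0; /* Kinetis M series, run mode */
        const double ua_per_mhz_kw =  40.0; /* Kinetis KW01, claimed      */
        const double clock_mhz     =  48.0; /* assumed clock              */

        printf("Kinetis M    @ %.0f MHz: %.2f mA\n", clock_mhz, ua_per_mhz_m  * clock_mhz / 1000.0);
        printf("Kinetis KW01 @ %.0f MHz: %.2f mA\n", clock_mhz, ua_per_mhz_kw * clock_mhz / 1000.0);
        return 0;
    }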
Further Reading
Monday, October 08, 2012
#CHIPS: "Supercomputer Apps Favor MIC over GPU"
Supercomputers used to have massive central-processing units (CPUs) that were lightning quick, but those have given way to networked multi-core processors with turned-down clock speeds to save power. Graphics-processing units (GPUs) fit right into this model by virtue of their simple architecture and massive parallelism, but now Intel's many-integrated-core (MIC) architecture, used for the first time in the Xeon Phi, promises to do everything that GPUs do, but with x86 software compatibility, prompting a scientist at the National Center for Supercomputing Applications to favor MICs over GPUs: R. Colin Johnson
NCSA’s Lincoln Cluster used Intel Xeon main processors and Nvidia Tesla graphics processing units (GPUs).
Here is what Go-Parallel says about MIC versus GPU: While the massively parallel Xeon Phi coprocessor faces off against supercomputers leveraging Nvidia’s graphics processing units (GPUs), Intel’s many-integrated-core (MIC) architecture will prevail, according to a senior research scientist at the National Center for Supercomputing Applications (NCSA) Innovative Systems Laboratory at the University of Illinois, Urbana-Champaign.
In a presentation at the Fifth International Workshop on Parallel Programming Models and Systems Software for High-End Computing (P2S2), held in Pittsburgh Sept. 10-12, Volodymyr Kindratenko asserted that GPU accelerators will eventually lose out to Intel’s MIC because its architecture only requires fine-tuning of parallel x86 code already running on supercomputers today...
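Kindratenko's "fine-tuning" argument is easiest to see with a concrete sketch--mine, not from the presentation--of the sort of ordinary parallel x86 code that would be the starting point for a MIC build:

    /* Ordinary parallel x86 code: an OpenMP dot product. Per the argument above,
       the same C source that runs on a multi-core Xeon is the starting point for
       a Xeon Phi (MIC) build; only tuning and a recompile are needed (offload or
       native-build details vary by toolchain and are omitted here). Compile with
       OpenMP enabled, e.g. -fopenmp. */
    double dot(const double *x, const double *y, long n)
    {
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < n; i++)
            sum += x[i] * y[i];
        return sum;
    }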
Further Reading
Saturday, October 06, 2012
#MEMS: "Omron Expands Vertical Integration to Gaming"
Omron told me 20 years ago that it was vertically integrating its MEMS-chip business, so that it could make not only the sensors used for blood pressure monitors, but also all the other subsystems needed to make consumer blood-pressure cuffs. I told them they were spreading their expertise too thin, but they proved me wrong by becoming the world leader in blood pressure cuffs. Omron is now expanding that vertical-integration philosophy into smart meters--in cooperation with STMicroelectronics--and into consumer gaming all on its own. Gaming will be a tougher sell, since it will be up against established titans--Sony, Nintendo and Microsoft. Ordinarily I would say they were crazy again, as I did when they entered the blood-pressure cuff market, but as I was wrong before, this time I will reserve judgment and just say that Omron will have the advantage of making all the components--including MEMS sensors--inside its gaming consoles: R. Colin Johnson
Here is what ST says about its development efforts with Omron: Omron Corp. (Kyoto, Japan) and STMicroelectronics (Geneva, Switzerland) announced the completion of the development of a MEMS-based gas flow sensor with industry-unique built-in correction for differences in gas composition. OMRON will start sample shipments of the new sensor in November 2012.
As with electricity-consumption measuring, gas metering is moving from conventional mechanical solutions to smart electronic meters with automatic meter-reading functions. There are over 400 million mechanical gas meters in the world and most major gas providers are readying to replace their traditional meters with more accurate and reliable electronic devices.
In addition to higher precision and reliability, the OMRON/ST sensor solution enables the development of smart gas meters that are much smaller, less expensive, and more power-efficient than the conventional equipment, resulting in substantial cost savings for the utility companies and end users alike. Industry analysts expect the global smart gas meter market to exceed 10 million units a year by 2015.
The new gas-flow sensor combines OMRON’s state-of-the-art MEMS (Micro-Electro-Mechanical System) thermal flow transducer with ST’s high-performance analog front-end IC, delivering high-precision gas flow-rate measurement with excellent reproducibility. Gas meters built around the OMRON/ST solution do not need to be configured for a certain type of gas at the time of shipment or installation, as they are intrinsically compensated for both temperature and pressure variations and a built-in circuit compensates for the variation of multiple gas composition. The sensor is dust-resistant to comply with international gas-meter standards.
“The successful collaboration with OMRON in gas metering expands ST’s foothold in the increasingly important field of ‘intelligent measurement’ and sets us to replicate the great success we have achieved in smart electricity metering,” said Marco Cassis, Executive Vice President and President, Japan and Korea Region, STMicroelectronics.
"We are very much excited to introduce a new powerful one-stop solution that enables a simple and very accurate Smart Gas Meter System for global markets through the successful collaboration with STMicroelectronics. By enabling IT-based smart metering, this new technology will significantly contribute to energy saving," said Yoshio Sekiguchi, Senior General Manager of the Micro Devices Division of OMRON Corporation.
Here is what Omron says about entering the gaming market: Omron Electronic Components LLC, the Americas subsidiary of OMRON Corporation (HQ: Kyoto, Japan) has announced Gaming as one of their official vertical market focuses, marking an exciting growth for both the company and the industry. This will be the fifth formal vertical market team for the company, complemented by Medical, Building Automation, Transportation and Test & Measurement. This strategic expansion brings OMRON’s long history, proven quality, and leadership role in the Amusement industry from Japan to the Americas; it also puts in place a local Sales team intensely dedicated to the support and growth of the Americas Gaming industry. In addition, Mr. Nate Takahashi, from Omron’s Amusement Division in Nagoya, Japan, has joined the team as a Field Application Engineer to support Sales and Marketing in the Americas for the next three years; he is based in Pleasanton, California.
Further Reading
#CHIPS: "Haswell Ups Parallelism Ante"
The semiconductor roadmap has hit a dead end as far as clock-speed scaling is concerned, with new microprocessors actually scaling back their clock rates--to lower power consumption--instead of increasing them, as had been the trend since the 1980s. Intel's new Haswell micro-architecture is the perfect example, since it increases performance while cutting power by adding support for parallel execution instead of just cranking up the clock: R. Colin Johnson
Executive vice president and chief product officer David Perlmutter provides updates about Haswell at the recent Intel Developers Forum.
Here is what Go-Parallel says about the Haswell micro-architecture to debut in 2013: Intel has revealed new architectural details about the Haswell micro-architecture and its support for parallel processing. The faster, lower-power Haswell will drastically cut power across the board, nearly double graphics processing speed, and include new advanced vector extension (AVX) instructions and other parallel enhancements, the company says. Two- and four-core Haswell processors for PCs, tablets and Ultrabooks will be available early in 2013, with workstation and server models to come later.
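Haswell's headline parallel feature is its 256-bit AVX2 instruction set with fused multiply-add (FMA). The fragment below is a minimal sketch of the kind of data-parallel loop those units accelerate--eight single-precision multiply-adds per instruction--written with Intel's immintrin.h intrinsics; the function and array names are illustrative assumptions, and it presumes a compiler invoked with flags such as -mavx2 -mfma:

/* Sketch: 8-wide fused multiply-add using Haswell's AVX2/FMA intrinsics.
   Assumes n is a multiple of 8 and the compiler targets AVX2 + FMA. */
#include <immintrin.h>

void saxpy_avx2(float *y, const float *x, float a, int n)
{
    __m256 va = _mm256_set1_ps(a);             /* broadcast scalar a to 8 lanes */
    for (int i = 0; i < n; i += 8) {
        __m256 vx = _mm256_loadu_ps(x + i);    /* load 8 floats (unaligned OK) */
        __m256 vy = _mm256_loadu_ps(y + i);
        vy = _mm256_fmadd_ps(va, vx, vy);      /* y = a*x + y in a single FMA */
        _mm256_storeu_ps(y + i, vy);
    }
}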
Further Reading
Friday, October 05, 2012
#CHIPS: "How Xeon Phi Stacks Up to GPUs"
Graphics processing unit (GPU) makers like Nvidia and AMD have been courting supercomputer makers, pitching GPUs as the future of the massively parallel high-performance computer (HPC). Now, however, Intel has challenged that wisdom with a massively parallel x86 architecture called many-integrated core (MIC). Intel's presentation at Hot Chips last month details why it thinks MIC--and its first implementation, the 50+ core Xeon Phi--beats GPUs at accelerating the supercomputers of the future: R. Colin Johnson
Here is what Go-Parallel says about MIC versus GPU: Xeon Phi lead architect George Chrysos presented comparisons between using Xeon Phi co-processors instead of graphics-processor units (GPUs) at the recent Hot Chips conference. According to the Top500 Super Computer Sites ranking, Intel’s many-integrated core (MIC) architecture not only outperformed the two top GPU-based supercomputers on the most recent Top500 list, but was also “greener” by virtue of providing more performance-per-Watt.
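Part of Intel's pitch for MIC over GPUs is that Xeon Phi runs ordinary x86 code parallelized with standard shared-memory models such as OpenMP, rather than requiring a GPU-specific language like CUDA. Below is a plain OpenMP sketch of the sort of loop that can simply be recompiled for the 50+ cores and wide vector units of a Xeon Phi; the function and array names are assumptions for illustration, not code from Intel's Hot Chips presentation:

/* Sketch: a standard OpenMP dot product; the same source builds for a host
   Xeon or, with Intel's tools, for a Xeon Phi co-processor. */
#include <omp.h>

double dot(const double *a, const double *b, long n)
{
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)   /* spread iterations across all cores */
    for (long i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}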
Further Reading
Thursday, October 04, 2012
#TABLET: "Windows Tablet Debuts Soon"
Surface was listed recently on Microsoft's website, although no price or availability date was given. Nevertheless, those details will take only a moment to add--a clear indicator that Surface's official release will be soon (click "Further Reading" below to see the new Surface website): R. Colin Johnson
In case you've been living under a rock, Surface is Microsoft's forthcoming Windows tablet and should not be confused with what Microsoft previously called Surface--a table-top-sized 40-inch touchscreen made by Samsung that has since been renamed Surface-40 (SUR40).
Surface, shown above with an optional keyboard built into its lid, will be released to enterprises first, according to Microsoft OEMs, who will integrate it with Windows servers and desktops using virtualization. A standalone consumer version will likely come later, probably in 2013. The enterprise version is said to run Windows RT on an ARM-based processor from Nvidia, while the consumer Surface is said to run Windows 8 on an Intel Ivy Bridge x86 processor.
Dell, HP, Lenovo and other original equipment manufacturers (OEMs) are already developing Android-based tablets to compete with Microsoft's Surface, but those won't be able to execute x86 code. It's also not clear whether both the ARM- and x86-versions of the new Windows OS will be made available to OEMs for their own Windows tablets, since those would compete directly with Microsoft's Surface.
Further Reading
Wednesday, October 03, 2012
#SECURITY: "MegaDroid Simulates Cyber Attackers Who Use Smartphones"
Smartphones and tablets running open-source operating systems like Google's Android are exceedingly vulnerable to cyber attacks, according to Sandia National Laboratories, which has created a network of 300,000 virtual Androids. Called MegaDroid, the virtual network will enable the Labs to test new methods of defeating criminals, hackers and spies who use Android devices to infiltrate secure systems: R. Colin Johnson
Sandia's David Fritz holds two Android smartphones, representing the virtual network of 300,000 such devices that he and other researchers are using to advance understanding of malicious computer networks on the Internet.
Here is what Sandia National Labs says about MegaDroid: As part of ongoing research to help prevent and mitigate disruptions to computer networks on the Internet, researchers at Sandia National Laboratories in California have turned their attention to smartphones and other hand-held computing devices.
Sandia cyber researchers linked together 300,000 virtual hand-held computing devices running the Android operating system so they can study large networks of smartphones and find ways to make them more reliable and secure. Android dominates the smartphone industry and runs on a range of computing gadgets.
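Sandia has not published the MegaDroid tooling here, but the general idea--booting many instances of the stock Android emulator, each headless and on its own console port--can be sketched at toy scale as below. The AVD name, instance count and port numbering are illustrative assumptions; Sandia's actual orchestration runs at a vastly larger scale on its own infrastructure:

/* Sketch: fork a few headless Android emulator instances, each on its own
   console/adb port pair, as a tiny stand-in for a large virtual-device network. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    for (int i = 0; i < 4; i++) {
        pid_t pid = fork();
        if (pid == 0) {                          /* child: exec one emulator */
            char port[16];
            snprintf(port, sizeof port, "%d", 5554 + 2 * i);
            execlp("emulator", "emulator",
                   "-avd", "megadroid_node",     /* hypothetical AVD name */
                   "-no-window",                 /* run headless */
                   "-port", port,
                   (char *)NULL);
            perror("execlp");                    /* only reached if exec fails */
            _exit(1);
        } else if (pid < 0) {
            perror("fork");
        }
    }
    return 0;
}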
The work is expected to result in a software tool that will allow others in the cyber research community to model similar environments and study the behaviors of smartphone networks. Ultimately, the tool will enable the computing industry to better protect hand-held devices from malicious intent.
The project builds on the success of earlier work in which Sandia focused on virtual Linux and Windows desktop systems.
“Smartphones are now ubiquitous and used as general-purpose computing devices as much as desktop or laptop computers,” said Sandia’s David Fritz. “But even though they are easy targets, no one appears to be studying them at the scale we’re attempting.”
The Android project, dubbed MegaDroid, is expected to help researchers at Sandia and elsewhere who struggle to understand large scale networks. Soon, Sandia expects to complete a sophisticated demonstration of the MegaDroid project that could be presented to potential industry or government collaborators.
The virtual Android network at Sandia, said computer scientist John Floren, is carefully insulated from other networks at the Labs and the outside world, but can be built up into a realistic computing environment. That environment might include a full domain name service (DNS), an Internet relay chat (IRC) server, a web server and multiple subnets.
A key element of the Android project, Floren said, is a “spoof” Global Positioning System (GPS). He and his colleagues created simulated GPS data of a smartphone user in an urban environment, an important experiment since smartphones and such key features as Bluetooth and Wi-Fi capabilities are highly location-dependent and thus could easily be controlled and manipulated by rogue actors.
The researchers then fed that data into the GPS input of an Android virtual machine. Software on the virtual machine treats the location data as indistinguishable from real GPS data, which offers researchers a much richer and more accurate emulation environment from which to analyze and study what hackers can do to smartphone networks, Floren said.
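The stock emulator makes this kind of GPS spoofing straightforward: each virtual Android exposes a console on a local TCP port that accepts a "geo fix <longitude> <latitude>" command, which the guest then receives as an ordinary location update. The sketch below pushes a short synthetic walk into one instance; the port number and coordinates are illustrative, and this shows the generic emulator mechanism rather than Sandia's own code:

/* Sketch: feed a synthetic GPS track to an Android emulator by sending
   "geo fix" commands to its console port (5554 for the first instance). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5554);                    /* emulator console port */
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");
    if (connect(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("connect");
        return 1;
    }
    double lon = -122.08, lat = 37.42;              /* starting point (illustrative) */
    for (int step = 0; step < 10; step++) {
        char cmd[80];
        lon += 0.0002; lat += 0.0001;               /* crude "urban walk" drift */
        snprintf(cmd, sizeof cmd, "geo fix %.5f %.5f\r\n", lon, lat);
        write(s, cmd, strlen(cmd));
        sleep(1);                                   /* one fix per second */
    }
    close(s);
    return 0;
}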
This latest development by Sandia cyber researchers represents a significant steppingstone for those hoping to understand and limit the damage from network disruptions due to glitches in software or protocols, natural disasters, acts of terrorism, or other causes. These disruptions can cause significant economic and other losses for individual consumers, companies and governments.
“You can’t defend against something you don’t understand,” Floren said. The larger the scale the better, he said, since more computer nodes offer more data for researchers to observe and study.
The research builds upon the Megatux project that started in 2009, in which Sandia scientists ran a million virtual Linux machines, and on a later project that focused on the Windows operating system, called MegaWin. Sandia researchers created those virtual networks at large scale using real Linux and Windows instances in virtual machines.
The main challenge in studying Android-based machines, the researchers say, is the sheer complexity of the software. Google, which developed the Android operating system, wrote some 14 million lines of code into the software, and the system runs on top of a Linux kernel, which more than doubles the amount of code.
“It’s possible for something to go wrong on the scale of a big wireless network because of a coding mistake in an operating system or an application, and it’s very hard to diagnose and fix,” said Fritz. “You can’t possibly read through 15 million lines of code and understand every possible interaction between all these devices and the network.”
Much of Sandia’s work on virtual computing environments will soon be available for other cyber researchers via open source. Floren and Fritz believe Sandia should continue to work on tools that industry leaders and developers can use to better diagnose and fix problems in computer networks.
“Tools are only useful if they’re used,” said Fritz.
MegaDroid primarily will be useful as a tool to ferret out problems that would manifest themselves when large numbers of smartphones interact, said Keith Vanderveen, manager of Sandia’s Scalable and Secure Systems Research department.
“You could also extend the technology to other platforms besides Android,” said Vanderveen. “Apple’s iOS, for instance, could take advantage of our body of knowledge and the toolkit we’re developing.” He said Sandia also plans to use MegaDroid to explore issues of data protection and data leakage, which he said concern government agencies such as the departments of Defense and Homeland Security.
Further Reading
Tuesday, October 02, 2012
#SECURITY: "Turning cyber security on its head"
As malware becomes more sophisticated, it's safer to install and run only known-trusted software apps instead of just scanning apps for viruses and assuming they are otherwise safe. By controlling the application environment, and only allowing certified trusted apps to run, even next-generation malware can be thwarted: R. Colin Johnson
Bit9's dashboard for the information technology (IT) department tracks files, software, and "drift" from standard configurations, plus provides a panic button (lower left) that locks down all connected systems to High Enforcement Level.
Here is what EETimes says about Bit9: As cyber security threats diversify, the most advanced solutions are upending the detection paradigm—from removing malicious software to installing only trusted software in the first place. Once considered too cumbersome for everyday use by IT departments, trust-based security—called application control—is now ready for mainstream IT departments, cloud deployments and virtualized environments, according to security software provider Bit9 Inc...
Bit9's database of known-good apps can be accessed with its Parity Knowledge Service, which evaluates whether software is trustworthy, here rejecting a file of unknown origin.
Here is what Bit9 says: Bit9, the global leader in Advanced Threat Protection, today introduced three industry-first breakthroughs to protect organizations against advanced threats and malware. Version 7.0 of the Bit9 security suite—which is available worldwide—delivers trust-based security that goes far beyond traditional whitelisting (a list of trusted software) and application control (stopping untrusted software). The industry firsts and enhancements in v7.0 include:
The first security solution to deliver IT- and cloud-driven trust: Bit9’s latest release enables IT organizations to create trust policies that leverage the trust ratings in Bit9’s cloud-based reputation service, the Global Software Registry™ (GSR), the largest database of trust ratings in the world, with 6 billion records indexed. This capability enables end users to install software without involvement from IT as long as the software has a sufficiently high trust rating from Bit9. This cuts administrative overhead and user impact by up to 40 percent, reducing both cost and effort. When combined with the ability to create specific IT-driven trust policies, Bit9 customers enjoy the lowest administrative overhead and user impact of any application control/whitelisting solution.
The first trust-based application control solution optimized for virtualized environments: Many organizations believe virtual environments are inherently secure because they can be reimaged each day. That fallacy creates a major security gap because 85 percent of advanced threat attacks do their damage within minutes, according to the Verizon 2012 Data Breach Investigations Report. Bit9’s new features eliminate repeated disk scans, multiple initializations of cloned virtual machines, problematic gold image updates, and other issues that plague traditional application control products in virtualized environments. This new release delivers the highest security, performance and reliability for all virtualized environments including virtual desktop infrastructure (VDI), server virtualization and terminal services/session virtualization.
The first application control solution with the features, scalability and integration to protect the largest enterprises: With support for up to 250,000 endpoints per Bit9 server, v7.0 is the first application control solution that scales to meet the needs of organizations of any size. It now includes role-based access control to make it easy and effective to administer within existing team structures and groups. Through open APIs and prebuilt integrations, Bit9’s solution also interoperates with existing security solutions, including SIEMs, log management systems, software delivery tools, patch management products, and ticketing systems.
Enhanced server security: Servers are the target of advanced threats because that's where an organization's intellectual property resides. Bit9 delivers enhanced memory protection, file integrity monitoring and device control to provide a single trust-based application control solution across all enterprise systems—servers, desktops and laptops.
Organizations of all types and sizes use Bit9’s trust-based security approach as a key element in dealing with all aspects of advanced threats and malware, including incident response, forensics, detection and protection. Bit9 today also announced the new Bit9 Managed Administrative Service (see news release [link to title]), which enables organizations to outsource the day-to-day operations of administering trusted software to Bit9, while retaining overall control of their corporate security policies...
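The core mechanic behind the trust-based application control described above is easy to illustrate, even though Bit9 wraps it in policy engines, reputation scoring and cloud services: hash the executable and refuse to run anything whose hash is not on an approved list. The following is a bare-bones sketch of that idea using OpenSSL's SHA-256 routines; the allowlist entry, file handling and default-deny behavior are illustrative assumptions, not Bit9's implementation or API:

/* Sketch: default-deny application control -- compute a file's SHA-256 and
   approve it only if the digest appears on a local allowlist (conceptual). */
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

static const char *allowlist[] = {    /* hypothetical known-good digests */
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
};

int is_trusted(const char *path)
{
    unsigned char buf[4096], md[SHA256_DIGEST_LENGTH];
    char hex[2 * SHA256_DIGEST_LENGTH + 1];
    SHA256_CTX ctx;
    size_t n;
    FILE *f = fopen(path, "rb");
    if (!f) return 0;                               /* unreadable: not trusted */
    SHA256_Init(&ctx);
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        SHA256_Update(&ctx, buf, n);
    fclose(f);
    SHA256_Final(md, &ctx);
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)  /* digest -> hex string */
        snprintf(hex + 2 * i, 3, "%02x", md[i]);
    for (size_t i = 0; i < sizeof allowlist / sizeof *allowlist; i++)
        if (strcmp(hex, allowlist[i]) == 0) return 1;
    return 0;                                       /* not on the list: deny */
}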
Further Reading
Monday, October 01, 2012
#CHIPS: "Laser-spike annealing could boost litho"
Laser-spike annealing could speed processing time while simultaneously decreasing the variability of advanced semiconductor wafers, according to the Semiconductor Research Corp. and Cornell University: R. Colin Johnson
Here is what EETimes says about laser-spike annealing: A new type of annealing developed by researchers at Cornell University promises to shorten processing time and improve the image quality of semiconductor lithography.
Laser-spike annealing (LSA), developed by Cornell researchers backed by Semiconductor Research Corp. (Research Triangle, N.C.), has already been tested for both 193-nanometer immersion lithography and 13-nm extreme ultraviolet (EUV). The technique is currently being considered for adoption by SRC members, including IBM Corp., Texas Instruments Inc., Intel Corp., Advanced Micro Devices Inc., Freescale Semiconductor Inc. and Globalfoundries Inc.
Further Reading