Friday, September 28, 2012

#WIRELESS: "iPhone-6 Revealed in Apple Patent Filings"

Apple has revealed details that may debut in the iPhone-6 next year, including a flexible display that raises the outline of letters for tactile feel when a keyboard is present on screen, voice-activated authentication so the iPhone-6 works only with you as its user, and gesture-based recognition to assist with image-processing algorithms. The patents, in and of themselves, only reveal Apple's ongoing development of next-generation technologies, but the images used to illustrate them reveal that the iPhone--with its familiar "home" button--is the target for these technologies: R. Colin Johnson

Two separate flexible-display drawings in Apple's patent filing, when re-oriented side-by-side, look like a clam-shell design rather than two separate concepts.

Raised letters for on-screen keyboards reveal that the surface of the display will no longer be glass, but a deformable polymer.

Here is what Apple's patent application says: Electronic devices may be provided that contain flexible displays and internal components. An internal component may be positioned under the flexible display. The internal component may be an output device such as a speaker that transmits sound through the flexible display or an actuator that deforms the display in a way that is sensed by a user. The internal component may also be a microphone or pressure sensor that receives sound or pressure information through the flexible display. Structural components may be used to permanently or temporarily deform the flexible display to provide tactile feedback to a user of the device. Electronic devices may be provided with concave displays or convex displays formed from one or more flexible layers including a flexible display layer. Portions of the flexible display may be used as speaker membranes for display-based speaker structures.
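The actuator scheme described above can be pictured as a grid of cells under the polymer surface, each raised or flattened as the on-screen keyboard appears and disappears. Here is a minimal sketch of that control flow; the class and field names (`FlexibleDisplay`, `KeyRegion`, the grid layout) are hypothetical illustrations, not anything from the patent itself:

```python
from dataclasses import dataclass

@dataclass
class KeyRegion:
    """One on-screen key mapped to an actuator cell (hypothetical model)."""
    label: str
    x: int  # actuator grid column
    y: int  # actuator grid row

class FlexibleDisplay:
    def __init__(self, rows, cols):
        # False = flat, True = raised; one flag per actuator cell
        self.raised = [[False] * cols for _ in range(rows)]

    def show_keyboard(self, keys):
        # Deform the polymer upward under each key outline
        for key in keys:
            self.raised[key.y][key.x] = True

    def hide_keyboard(self):
        # Return the whole surface to flat glass-like state
        for row in self.raised:
            for i in range(len(row)):
                row[i] = False
```

A real controller would drive many cells per key and modulate deformation height, but the appear/flatten life cycle would follow this shape.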

Voice authentication ensures that the user is really the owner.

Here is what Apple's patent application says: A device can be configured to receive speech input from a user. The speech input can include a command for accessing a restricted feature of the device. The speech input can be compared to a voiceprint (e.g., text-independent voiceprint) of the user's voice to authenticate the user to the device. Responsive to successful authentication of the user to the device, the user is allowed access to the restricted feature without the user having to perform additional authentication steps or speaking the command again. If the user is not successfully authenticated to the device, additional authentication steps can be requested by the device (e.g., request a password).
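The flow described above amounts to: extract features from the speech, compare them against an enrolled voiceprint, and either execute the restricted command or fall back to a password prompt. A minimal sketch of that decision logic follows, using cosine similarity between feature vectors; the threshold value and all function names are hypothetical stand-ins, not Apple's actual method:

```python
import math

SIMILARITY_THRESHOLD = 0.95  # hypothetical acceptance threshold

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def authenticate(speech_features, enrolled_voiceprint):
    """True if the speaker's features match the enrolled voiceprint."""
    return cosine_similarity(speech_features, enrolled_voiceprint) >= SIMILARITY_THRESHOLD

def handle_command(speech_features, enrolled_voiceprint, command):
    # On a match, run the restricted command with no further steps;
    # otherwise fall back to an additional authentication step.
    if authenticate(speech_features, enrolled_voiceprint):
        return "executing restricted command: " + command
    return "authentication failed: please enter your password"
```

Because the patent describes a text-independent voiceprint, the same enrolled vector would be compared against any spoken command, not a fixed passphrase.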

Using hand gestures above the screen, users can filter and touch-up images.

Here is what Apple's patent application says: This disclosure pertains to apparatuses, methods, and computer readable medium for mapping particular user interactions, e.g., gestures, to the input parameters of various image filters, while simultaneously setting auto exposure, auto focus, auto white balance, and/or other image processing technique input parameters based on the appropriate underlying image sensor data in a way that provides a seamless, dynamic, and intuitive experience for both the user and the client application software developer. Such techniques may handle the processing of image filters applying location-based distortions as well as those image filters that do not apply location-based distortions to the captured image data. Additionally, techniques are provided for increasing the performance and efficiency of various image processing systems when employed in conjunction with image filters that do not require all of an image sensor's captured image data to produce their desired image filtering effects.
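The core idea, mapping gesture measurements onto filter input parameters, can be sketched as a simple translation table: a pinch scale drives blur radius, a horizontal pan drives brightness, and a touch point becomes the center for a location-based distortion. All of the key names, ranges, and scale factors below are hypothetical illustrations of the mapping concept, not values from the patent:

```python
def clamp(value, lo, hi):
    """Keep a parameter within its filter's valid range."""
    return max(lo, min(hi, value))

def gesture_to_filter_params(gesture):
    """Map gesture measurements to image-filter input parameters.

    Hypothetical gesture keys:
      'pinch_scale' : pinch factor (0.5..2.0) -> blur radius
      'pan_x'       : horizontal pan (-1.0..1.0) -> brightness offset
      'touch_point' : (x, y) normalized coords -> distortion center
    """
    params = {}
    if "pinch_scale" in gesture:
        params["blur_radius"] = clamp((gesture["pinch_scale"] - 0.5) * 10.0, 0.0, 15.0)
    if "pan_x" in gesture:
        params["brightness"] = clamp(gesture["pan_x"] * 0.5, -0.5, 0.5)
    if "touch_point" in gesture:
        # Location-based filters (e.g., a twirl distortion) take a center point
        params["distortion_center"] = gesture["touch_point"]
    return params
```

In the patent's framing, this mapping would run alongside the automatic pipeline (auto exposure, auto focus, auto white balance), so the gesture only steers the filter while the sensor data keeps driving the underlying image-processing parameters.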

Further Reading