Thursday, February 28, 2013
"Patents and innovation are a critical component of Aptina’s strategy, and Aptina’s patent portfolio is the largest and strongest in the image sensor industry," said Bob Gove, President and CTO of Aptina. "We believe that this powerful blend will advance technology to realize our goal of enabling consumers to capture beautiful images and visual information."
The second review comes from Samsung Securities and discusses the 13MP camera phone supply chain, quite a complex multi-source one:
Wednesday, February 27, 2013
The chip offers a hardware solution to some important problems in computational photography, says Michael Cohen at Microsoft Research in Redmond, Wash. "As algorithms such as bilateral filtering become more accepted as required processing for imaging, this kind of hardware specialization becomes more keenly needed," he says.
The power savings offered by the chip are particularly impressive, says Matt Uyttendaele, also of Microsoft Research. "All in all [it is] a nicely crafted component that can bring computational photography applications onto more energy-starved devices," he says.
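For readers unfamiliar with the algorithm Cohen mentions, here is a minimal brute-force bilateral filter sketch in Python/NumPy. The chip implements a hardware-accelerated variant; this naive version is only meant to illustrate the edge-preserving smoothing idea:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter for a 2D grayscale image in [0, 1].

    Each output pixel is a weighted average of its neighbours, where the
    weight falls off with both spatial distance (sigma_s) and intensity
    difference (sigma_r) -- so edges are preserved while noise is smoothed.
    """
    h, w = img.shape
    padded = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img)
    # Precompute the spatial Gaussian kernel once.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weight: penalize neighbours with different intensity.
            rangew = np.exp(-(window - img[i, j])**2 / (2 * sigma_r**2))
            weights = spatial * rangew
            out[i, j] = (weights * window).sum() / weights.sum()
    return out
```

The double loop makes the cost per pixel proportional to the window area, which is exactly the kind of workload that benefits from dedicated hardware.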
The work was funded by the Foxconn Technology Group, based in Taiwan.
Ahead of production of the new camera modules sometime in 2014, Toshiba is working to improve its algorithms to attain better speed and distance accuracy, as well as to lessen the load on a phone’s central processor.
|The image from Toshiba's light field camera shows the distance of two dolls by color, with the scale marked in millimeters|
Update: Toshiba Review, Oct. 2012 published a paper on this camera (in Japanese).
Tuesday, February 26, 2013
Attempts to advance the Bayer pattern, the standard since the 1970s, have not been successful until now. Aptina Clarity+ technology combines CFA, sensor design and algorithm developments and is fully compatible with today’s standard sub-sampling and defect correction algorithms, which means no visible imaging artifacts are introduced. While BSI technology innovation drove 1.4um pixel adoption, Clarity+ technology will drive 1.1um pixel adoption into the smartphone market. Additionally, this technology works seamlessly with 4th generation Aptina MobileHDR technology, increasing DR for both snapshot and video captures by as much as 24dB.
"Clarity+ technology enables a substantial improvement in picture clarity when capturing images with mobile cameras. While many others have introduced technologies that increase the sensitivity of cameras with clear pixel or other non-Bayer color patterns and processing, Aptina’s Clarity+ technology achieves a doubling of sensitivity, but uniquely without the introduction of annoying imaging artifacts," said Bob Gove, President and CTO at Aptina. "This optimized performance is enabled by innovations in the sensor and color filter array design, along with advancements in control and image processing algorithms. Our total system approach to innovation has led prominent OEMs to confirm that Clarity+ technology has clearly succeeded in producing a new level of performance for 1.1um pixel image sensors. We see Clarity+ technology playing a significant role in our future products, with initial products in the Smartphone markets."
Aptina will incorporate this innovative technology into a complete family of 1.1um and 1.4um based products addressing both front and rear facing applications. The AR1231CP 12MP, 1.1um mobile image sensor is the first sensor to support Clarity+ technology. This 1/3.2-inch BSI sensor delivers 60fps at full resolution and supports 4K video at 30fps. The Aptina AR1231CP is now sampling.
Here is how the world looks in 900-1700nm band:
A Youtube video shows the DR:
The chip is not the first to perform random-pixel summing electronically, but it is the first to capture many different random combinations simultaneously, doing away with the need to take multiple images for each compressed frame. This is a significant accomplishment, according to other experts. "It’s a clever implementation of the compressed-sensing idea," says Richard Baraniuk, a professor of electrical and computer engineering at Rice University, in Houston, and a cofounder of InView Technology.
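As a toy illustration of the compressed-sensing idea (not the chip's actual reconstruction pipeline), a sparse signal can be recovered from fewer random measurements than pixels. Here the random Gaussian measurement matrix stands in for the sensor's random pixel combinations, and orthogonal matching pursuit does the recovery; all sizes and seeds are illustrative assumptions:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit least squares on the selected columns.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 64, 40, 3                  # 64 "pixels", 40 random measurements
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)  # sparse scene
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random summing patterns
y = A @ x                            # one compressed readout
x_hat = omp(A, y, k)                 # near-exact recovery, with high probability
```

The point of capturing many random combinations simultaneously, as the chip does, is that all m rows of this measurement arrive in a single exposure rather than m sequential shots.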
Monday, February 25, 2013
"Today’s compact mainstream sensors are only able to capture a fraction of what the human eye can see," said Dr. Martin Scott, CTO at Rambus. "Our breakthrough binary pixel technology enables a tremendous performance improvement for compact imagers capable of ultra high-quality photos and videos from mobile devices."
This binary pixel technology is optimized at the pixel level to sense light similarly to the human eye while maintaining a form factor, cost and power comparable to today’s mobile and consumer imagers. The sensor is said to be optimized at the pixel level to deliver DSLR-level dynamic range from mobile and consumer cameras. The Rambus binary pixel has been demonstrated in a proof-of-concept test chip, and the technology is currently available for integration into future mobile and consumer image sensors.
Benefits of binary pixel technology:
- Improved image quality optimized at the pixel level
- Single-shot HDR photo and video capture operates at high-speed frame-rates
- Improved signal-to-noise performance in low-light conditions
- Extended dynamic range through variable temporal and spatial oversampling
- Silicon-proven technology for mobile form factors
- Easily integratable into existing SoC architectures
- Compatible with current CMOS image sensor process technology
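The "variable temporal and spatial oversampling" bullet can be sketched with a rough Monte-Carlo model of a binary pixel: many single-photon-threshold sub-pixels whose summed response follows a soft, film-like saturation curve instead of clipping at a linear full well. This is the general binary-pixel concept; Rambus has not disclosed its actual implementation, and the sub-pixel count here is an arbitrary assumption:

```python
import numpy as np

def binary_pixel_response(mean_photons, n_sub, rng):
    """Response of a pixel built from n_sub binary sub-pixels.

    Each sub-pixel outputs 1 if it catches at least one photon during the
    exposure. The expected sum is n_sub * (1 - exp(-mean/n_sub)), which
    saturates gradually, extending dynamic range at the bright end.
    """
    photons = rng.poisson(mean_photons / n_sub, size=n_sub)
    return int((photons >= 1).sum())

rng = np.random.default_rng(1)
n_sub = 4096
dim = binary_pixel_response(10, n_sub, rng)       # ~10: nearly linear
bright = binary_pixel_response(100_000, n_sub, rng)  # near n_sub: soft clip
```

At low flux the response tracks the photon count almost linearly; at high flux it compresses logarithmically rather than clipping, which is the claimed route to extended dynamic range.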
Rambus published a demo video showing its binary pixel capabilities.
- Compound Optics and Imaging
- Motion detection and Circuits
- Optic Flow
ST’s solution is an infra-red emitter that sends out light pulses, a fast light detector that picks up the reflected pulses, and electronic circuitry that accurately measures the time difference between the emission of a pulse and the detection of its reflection. Combining three optical elements in a single compact package, the VL6180 is the first member of ST’s FlightSense family and uses the ToF technology.
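The underlying arithmetic is simple: distance is the speed of light times half the measured round-trip delay. A sketch (illustrative only; the VL6180's internal processing is not public):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s):
    """Distance to the target from the pulse's measured round-trip time."""
    return C * round_trip_s / 2

def round_trip_time_s(distance_m):
    """Round-trip delay for a target at the given distance."""
    return 2 * distance_m / C

# A target 10 cm away returns the pulse after only ~0.67 ns, which is why
# ToF proximity sensing demands picosecond-class timing resolution.
```

This also shows why ToF beats intensity-based proximity sensors: the measurement depends on timing, not on how reflective the target (e.g. a face) happens to be.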
"This marks the first time that Time-of-Flight technology has been made available in a form factor small enough to integrate into the most space-constrained smartphones," said Arnaud Laflaquière, GM of ST’s Imaging Division. "This technology breakthrough brings a major performance enhancement over existing proximity sensors, solving the face hang-up issues of current smartphones and also enabling new innovative ways for users to interact with their devices."
Update: ST backgrounder says that ST roadmap includes 2D and 3D ToF sensors. No other detail is disclosed.
"Designers choosing the AR0261 will find new and exciting ways to enable users to interact with their favorite devices," said Aptina’s Roger Panicacci, VP of Product Development. "Gesture recognition, 3-D and HDR will open a whole new world of personal interaction with these devices."
The AR0261 is sampling now and will be in production by summer of 2013.
Business Wire: Aptina announces 12MP AR1230 and 13MP AR1330 mobile sensors featuring 1.1um pixels. Both sensors feature 4th generation MobileHDR technology which is said to increase DR as much as 24dB. The AR1230 captures 4K video at 30fps as well as 1080P video up to 96fps. The AR1330 provides electronic image stabilization support in 1080P mode while capturing video in both 4K and 4K Cinema formats at 30fps. Additionally, both sensors support advanced features like super slow motion video, new zoom methodologies, computational array imaging and 3D image capture.
"Built on Aptina’s smallest, and most advanced 1.1-micron pixel technology, the AR1230 and AR1330 image sensors provide the high resolution, impressive low-light sensitivity, and advanced features that manufacturers of high-end smartphones are looking for," said Gennadiy Agranov, Pixel CTO and VP at Aptina. "These sensors are the first of a family of new, high quality 1.1-micron based products being sampled by our customers, and which Aptina will be delivering in the coming years."
The AR1230 is now in production and is available for mass production orders immediately. The TSMC-manufactured AR1330 is sampling now and will be in production by summer of 2013.
- "A 3D vision 2.1 Mpixels image sensor for single-lens camera systems", by S.Koyama of Panasonic
- "A 187.5 uVrms read noise 51 mW 1.4 Mpixel CMOS image sensor with PMOSCAP column CDS and 10b self-differential offset-cancelled pipeline SAR-ADC", by J. Deguchi of Toshiba
Toshiba has developed three key technologies to overcome these challenges:
- Column CDS circuits primarily made up of area-efficient PMOS capacitors. The area of the CDS circuits is reduced to about half that of conventional circuits.
- In the readout circuits, a level shift function is simultaneously achieved by a capacitive coupling through the PMOS capacitors, allowing adjustment of the signal dynamic range between the column CDS circuits and the PGA and the ADC. This achieves low power and low voltage implementation of the PGA and ADC, reducing their power consumption by 40%.
- Implementation of a low power switching procedure in the ADC suited to processing the pixel signals of CMOS image sensors. This reduces the switching power consumption of the ADC by 80%.
Update: PRWeb: Pelican Imaging will be giving private demonstrations in Barcelona, February 25-28, 2013. Pelican Imaging’s camera is said to be 50% thinner than existing mobile cameras, and allows users to perform a range of selective-focus adjustments and edits, both pre- and post-capture.
"Our technology is truly unique and radically different than the legacy approach. Our solution gives users a way to interact with their images and video in wholly new ways," said Pelican Imaging CEO and President Christopher Pickett. "We think users are going to be blown away by the freedom to refocus after the fact, focus on multiple subjects, segment objects, take linear depth measurements, apply filters, change backgrounds, and easily combine photos, from any device."
Sunday, February 24, 2013
Papers will be presented on the following topics:
- Status and plans for astronomical facilities and instrumentation (ground & space)
- Earth and Planetary Science missions and instrumentation
- Laboratory instrumentation (physical chemistry, synchrotrons, etc.)
- Detector materials (from Si and HgCdTe to strained layer superlattices)
- Sensor architectures – CCD, monolithic CMOS, hybrid CMOS
- Sensor electronics
- Sensor packaging and mosaics
- Sensor testing and characterization
- Mark McCaughrean & Mark Clampin "Space Astronomy Needs"
- Bonner Denton "Laboratory Instrumentation"
- Roland Bacon "MUSE - Example of Imaging Spectroscopy"
- Ian Baker & Johan Rothman "HgCdTe APDs"
- Jean Susini "Synchrotrons"
- Rolf Kudritzki "Stellar Astrophysics: Perspectives on the Evolution of Detectors"
- Jim Gunn "Why Imaging Spectroscopy?"
- Harald Michaelis "Planetary Science"
- Robert Green "Imaging Spectroscopy for Earth Science and Exoplanet Exploration"
Thanks to AT for the link!
- L. Braga (FBK, Trento) "An 8×16 pixel 92kSPAD time-resolved sensor with on-pixel 64 ps 12b TDC and 100MS/s real-time energy histogramming in 0.13 um CIS technology for PET/MRI applications"
- C. Niclass (Toyota) "A 0.18 um CMOS SoC for a 100m range, 10 fps 200×96 pixel Time of Flight depth sensor"
- O. Shcherbakova (University of Trento) "3D camera based on linear-mode gain-modulated avalanche photodiodes"
Saturday, February 23, 2013
Foster, who came to Johns Hopkins in 2010, works in the area of non-linear optics and ultrafast lasers – measuring phenomena that occur in femtoseconds. "With this project, we hope to create the fastest video device ever created," he explained.
Unfortunately, Mark Foster's home page has no explanation on how the new camera works. His publications page mostly links to high-speed optical communication papers, rather than imaging.
|DML=Digital MicroLens, made with patterns smaller|
Panasonic plans to use this technology in industrial and mobile products appearing in 2014.
Business Wire: Aptina says that its MobileHDR technology enables Chimera architecture support. MobileHDR technology is integrated into a number of Aptina’s high-end current and future mobile products including the AR0835 (8MP), AR1230 (12MP), and the AR1330 (13MP).
|A point of the finger is all it takes to send the defect in the paint to the QS inspection system, store it and document it|
Friday, February 22, 2013
Olympus presented "A rolling-shutter distortion-free 3D stacked image sensor with -160 dB parasitic light sensitivity in-pixel storage node", by J. Aoki
Sony presented "A 1/4-inch 8M pixel back-illuminated stacked CMOS image sensor" by S. Sukegawa
From what Albert writes, I'd bet that Olympus was granted early access to Sony's stacked sensor process, so that both presentations rely on the same technology.
Sony does not tell the exact details of its TSV processing, but shows the stacked chips cross-section:
The sensor spec slide mentions 5Ke full well - very impressive for a 1.12um pixel:
Tech-On article shows many more slides from the presentation.
The key statements in the report:
- The global image sensors market is estimated to grow at a modest CAGR of 3.84% from 2013 to 2018 and is expected to cross $10.75 billion by the end of these five years.
- Currently, camera phones are the major contributors to this market. In 2012, approximately 80% of image sensors were shipped for this application
- Emerging applications such as medical imaging, POV cameras, UAV cameras, digital radiology, and factory automation are driving the market with their high growth rates. However, these are low-volume applications and will require a longer duration to impact the market heavily
- CMOS technology commands a major market share against CCD and contact image sensor types. In 2012, CMOS held approximately 85% of the share
- Apart from the visible spectrum domain, manufacturers are also focusing on the infrared and X-ray image sensors. By the year 2018, approximately 800,000 units of infrared sensors and 300,000 units of X-ray sensors are forecasted to be shipped
- Currently, North America holds the largest share, but APAC is expected to surpass in 2013 with its strong consumer electronics market
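The CAGR figure above can be sanity-checked: at 3.84% per year, the $10.75B 2018 forecast implies a 2013 base of roughly $8.9B. The report does not state the base explicitly; this back-calculation is mine:

```python
def implied_base(future_value, cagr, years):
    """Back out the starting value consistent with a compound annual growth rate."""
    return future_value / (1 + cagr) ** years

# 2018 forecast of $10.75B at 3.84% CAGR over 5 years, in $ billions
base_2013 = implied_base(10.75, 0.0384, 5)
```

Growing `base_2013` forward at 3.84% for five years reproduces the $10.75B forecast exactly, by construction.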
|Image sensor sales by region|
Thursday, February 21, 2013
"A 3.4 uW CMOS image sensor with embedded feature-extraction algorithm for motion-triggered object-of-interest imaging" by J. Choi
"A 467 nW CMOS visual motion sensor with temporal averaging and pixel aggregation" by G. Kim
In order to make a reliable recognition of a watcher, the STB camera needs to have quite a high resolution and a good low light sensitivity. If Intel's approach is widely adopted, STBs might become a next big market for image sensors.
8M units in the first 60 days of sales (Nov-Dec 2010).
18M in a year from the launch (Jan. 2012, Reuters)
So, there appears to be a significant slowdown in Kinect sales.
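A back-of-envelope comparison of the two sales rates makes the slowdown concrete. The day counts are my own approximation, taking the 18M figure to cover roughly the first year (365 days) after launch:

```python
# Units per day during the first 60 days vs. the rest of the first year.
first_rate = 8_000_000 / 60                         # ~133k units/day at launch
later_rate = (18_000_000 - 8_000_000) / (365 - 60)  # ~33k units/day afterwards
slowdown = first_rate / later_rate                  # roughly a 4x drop
```

Even with generous assumptions about the exact dates, the daily rate fell by about a factor of four after the launch window.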
Wednesday, February 20, 2013
The aligner can position an image sensor with an accuracy of 1.5um in the X-Y direction, 3um in depth, 0.1 arc-minute in tilt and 3 arc-minutes in rotation:
The 2-axes (Pitch, Yaw) Blur Vibration Simulator is said to be compliant with CIPA rules (CIPA DC-011-2012 "Measurement method and notation method related with offset function of blur vibration of digital camera"):
Tuesday, February 19, 2013
DOC’s mems|cam modules are said to deliver industry-leading AF speed. A fast settling time (typically less than 10ms) combined with precise location awareness results from the MEMS technology. DOC’s MEMS autofocus actuators operate on less than 1mW of power (roughly 1% of a VCM's), thereby extending battery life and reducing the thermal load on the image sensor, lens, and adjacent critical components. Manufactured with semiconductor processes, DOC’s silicon actuators deliver precise repeatability, negligible hysteresis, and millions of cycles of longevity, ensuring high-quality image and video capture for the life of a product. Combined with DOC’s optics design and flip-chip packaging, the camera module z-height is just 5.1mm.
DigitalOptics is initially targeting smartphone OEMs in China for its mems|cam modules. "Smartphone OEMs in China are driving innovative new form factors, features, and camera functionality," said Jim Chapman, SVP sales and marketing at DigitalOptics Corporation. "These OEMs recognize the speed, power, and precision advantages of mems|cam relative to existing VCM camera modules."
"We have a strategic relationship with DigitalOptics for mems|cam modules, having recognized the potential advantages of implementing a mems|cam module into our handsets," said Zeng Yuan Qing, vice general manager of Guangdong Oppo Mobile Telecommunications Corp., Ltd, a Chinese smartphone OEM.
Thanks to SF and LM for the links!
The camera spec does not tell how many pixels the camera has, saying instead that the sensor is of BSI type, has 2.0um pixels and 1/3-inch format. The camera has F2.0 lens with OIS. Both front and back cameras support HDR in stills and video mode.
Speaking of HDR, Nvidia's new Tegra 4i is the second application processor that supports "always-on HDR" imaging.
Update: CNET published HTC's director of special projects Symon Whitehorn comments on why HTC flagship smartphone has only 4MP resolution: "It's a risk, it's definitely a risk that we're taking. Doing the right thing for image quality, it's a risky thing to do, because people are so attached to that megapixel number."
The HTC Ultrapixel page shows a few illustrations:
|HTC One vs iPhone 5 comparison|
HTC marketing efforts are aimed against high megapixel sensors:
|HTC Zoe™ Camera with UltraPixels||Typical Smartphone Camera||Difference|
|Lens with F2.0 aperture||Lens with F2.8 aperture||1.96x more light entry than F2.8|
|2.0um pixel size||~1.4um pixel size (on typical 8MP sensors), ~1.1um pixel size (on typical 13MP sensors)||2.04x more sensitivity than 1.4um, 3.31x more sensitivity than 1.1um|
|2-axis optical image stabilizer||2-axis optical image stabilizer||Allowing longer exposure with more stability, resulting in higher quality photos with lower noise and better lowlight sensitivity|
|HDR for video (~84dB)||No video HDR (~54dB)||~1.5x more dynamic range with 84dB compared to 54dB in competitors|
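HTC's comparison numbers follow from simple area ratios: light gathered scales with the square of the aperture ratio and of the pixel pitch ratio. Note that the "~1.5x more dynamic range" claim compares the raw dB figures, not the linear range:

```python
# Light gathering scales with area, i.e. with the square of the linear ratio.
aperture_gain = (2.8 / 2.0) ** 2         # F2.0 vs F2.8 -> 1.96x more light
pixel_gain_8mp = (2.0 / 1.4) ** 2        # 2.0um vs 1.4um pixels -> ~2.04x
pixel_gain_13mp = (2.0 / 1.1) ** 2       # 2.0um vs 1.1um pixels -> ~3.31x

# The DR claim: 84/54 dB is ~1.56x as a ratio of dB values,
# but 30 dB more range is ~31.6x in linear amplitude terms.
db_ratio = 84 / 54
linear_dr_gain = 10 ** ((84 - 54) / 20)
```

So the marketing table is internally consistent, as long as one reads the dynamic-range row as a ratio of decibel figures.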
HTC One camera spec:
|Sensor Type||CMOS BSI|
|Sensor Pixel Size||2um X 2um|
|Camera Full Size Resolution||2688 x 1520 (16:9 ratio), shutter speed up to 48fps with reduced motion blur|
|Video Resolutions||1080P up to 30fps, 720P up to 60fps, 1080P with HDR up to 28fps, 768x432 up to 96fps; H.264 high profile, up to 20mbps|
|Focal Length of System||3.82 mm|
|Optical F/# Aperture||F/2.0|
|Number of Lens Elements||5P|
|Optical Image Stabilizer||2-axis, +/- 1 degree (average), 2000 cycles per second|
|ImageChip / ISP Enhancements||HTC continuous autofocus algorithm (~200ms), De-noise algorithm, color shading for lens compensation|
|Maximum frames per second||Up to 8fps continuous shooting|
Monday, February 18, 2013
Sunday, February 17, 2013
Saturday, February 16, 2013
Q: You have been involved in CMOS development since the beginning at JPL, what are the key development milestones you have witnessed?
A: "I think the key development milestones were: the introduction of the pinned photodiode (commonly used in interline transfer CCDs) to the CMOS pixel, the shared pixel scheme for pixel size reduction, and improvements in FPN suppression circuitry.
By the way, I consider the R&D efforts on on-chip ADC at JPL (led by Dr. Eric Fossum) to be the origin of the current success of the CMOS image sensor.
Another aspect is that big semiconductor companies acquired key start-up companies in the late 1990s and early 2000s to establish CMOS image sensor businesses quickly. Examples include STMicro acquiring VVL, Micron acquiring Photobit, and Cypress acquiring FillFactory.
Also, the biennial Image Sensor workshop has been playing an important role in sharing and discussing progress in CMOS imaging technologies."
Q: CMOS performance continues to improve with each new generation - what's the current R&D focus at Aptina?
A: "High-speed readout, combined with improving noise performance, is the strength of our design. We will continue to focus on this. At the same time, our R&D group at the headquarters in San Jose focuses on pixel performance improvements."
Friday, February 15, 2013
|TSMC Stacked Sensor (features not in scale)|
Thursday, February 14, 2013
Wednesday, February 13, 2013
|Cross Section Showing the TSVs|
Chipworks says: "A thin back-illuminated CIS die (top) mounted to the companion image processing engine die and vertically connected using a through silicon via (TSV) array located adjacent to the bond pads. A closer look at the TSV array in cross section shows a series of 6.0 µm pitch vias connecting the CIS die and the image processing engine."
Thanks to RF for updating me!
The IVP is extremely power efficient. As an example, for an IVP implemented with an automatic synthesis and P&R flow in a 28nm HPM process (regular VT), a 32-bit integral image computation on 16b pixel data at 1080p30 consumes 10.8 mW. The integral image function is commonly used in applications such as face and object detection and gesture recognition.
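For reference, the integral image (summed-area table) that Tensilica benchmarks lets the sum over any rectangle be read back with four lookups, which is why it underpins face/object detectors. A NumPy sketch:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[r, c] = sum of img[:r+1, :c+1]."""
    return np.cumsum(np.cumsum(np.asarray(img, dtype=np.int64), axis=0), axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of the inclusive rectangle (r0, c0)..(r1, c1) in four lookups."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return int(total)
```

Building the table costs one pass over the frame; after that, every box filter in a detection cascade is constant-time, which is exactly the memory-bandwidth-bound pattern a DSP like the IVP targets.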
Tensilica also announced a number of alliances with software vendors Dreamchip, Almalence, Irida Labs, and Morpho.
|The IVP Core Architecture with Sample Memory Sizes Selected|
EETimes talks about a battle between Tensilica and Ceva imaging IP cores. No clear winner is declared, but Tensilica IVP is said to have more processing power. Another EETimes article talks about ISP IP applications and requirements.
"InVisage is poised to make a tremendous impact on consumer devices and end users with its QuantumFilm image sensors," said Thomas Ng, founding partner, GGV Capital.
"The innovative QuantumFilm technology from InVisage has the potential to disrupt the market for silicon-based image sensors," said Bo Ilsoe, managing partner of Nokia Growth Partners. "Imaging remains a core investment area for NGP, and it is our belief that InVisage's technology will change how video and images are captured in consumer devices."
"The participation of new investors, including a major handset maker, in this round signals that imaging is a critical differentiator in mobile devices," said Jess Lee, InVisage CEO. "For too long, the image sensor industry has lacked innovation. We are excited to bring stunning image quality and advanced new features that will truly transform this industry."
InVisage QuantumFilm is said to be the world's most light-sensitive image sensor for smartphones. Compared to current camera technologies, the QuantumFilm is said to provide incredible performance in the smallest package, making picture-taking foolproof, even in dimly lit rooms.
Tuesday, February 12, 2013
Q: sCMOS claims many advantages compared to more traditional technologies, are there any drawbacks or areas for further research?
A: "I would say that nature doesn't make it easy for you. Certainly there are features that can be improved – for example, the blocking efficiency or shutter ratio, as well as some crosstalk and some lag issues – but I guess sCMOS has this in common with every new technology. I don't know any new technology which was perfect from the start."
Q: This technology has been on the market for several years now, what have you learned over this time, and how have you optimized your camera systems?
A: "Since many of our cameras are used for precise measurements, we have learned a lot about the proper control of these image sensors and how each camera has to be calibrated and each pixel has to be corrected. I will address some of these issues in my presentation and show how we have solved them. Further, there are some characteristics in the noise distribution that have to be considered."
Ziv Attar, CEO of Linx Imaging talks about multi-aperture imaging:
Q: There's a lot of discussion around multi-aperture imaging right now - the concept has been around for a long time, why do you think it's a hot topic right now?
A: "...Sensors, optics and image processors have been around for quite some time now, yet no array camera has been commercialized. ...Multi-aperture cameras require heavy processing which was not available on mobile devices until now. 20 years ago we would have needed a supercomputer to process an image from a multi-aperture camera. I think the timing is right due to a combination of technology maturity and market demand..."
Q: Will we see your technology in a commercialized form soon?
A: "Yes. You will. We are devoting all our resources and energy into commercializing our technology. There are plenty of challenges related to manufacturing of the optics, sensors, module assembly and software optimization, all of which require time, hard work and plenty of creativity, which is what makes our life fun."
Saturday, February 09, 2013
PR.com: CTO Charles McGrath states: "we can get the retail price of the Mμ Thermal Imager down to $325 initially, and with enough orders from the big box retailers we think we may even cut that price considerably. Another goal is to double the resolution within a year."
Friday, February 08, 2013
"DOC has a unique and differentiated MEMS approach to smartphone camera modules that I believe has the potential to revolutionize mobile imaging," said Thode. "I look forward to working with the team to deliver MEMS autofocus camera modules to market and to build on DOC’s emerging role in this exciting space."
Thode, 55, was most recently EVP and GM at McAfee. Before that, Thode was GM of Dell’s Mobility Products Group where he was responsible for leading the strategy and development of Dell’s nontraditional products, including smartphones and tablets. Prior to Dell, Thode was president and CEO at ISCO, a telecom infrastructure company. Thode also spent 25 years with Motorola in various management roles, including GM of its UMTS Handset Products and Personal Communications Sector and GM of its Wireless Access Systems Division in its General Telecoms Systems Sector.
Business Wire: Tessera announces Q4 and 2012 earnings. In Q4 DOC revenue was $10.2M, compared to $7.7M a year ago. The increase was due primarily to sales of fixed focus camera modules of $3.8M that occurred in 2012 but did not occur in 2011, which was partly offset by lower revenues from the company’s image enhancement technologies and weaker demand for the company’s Micro-Optics products.
For the full year 2012, DOC revenue was $41.1M.
Thanks to SF for the links!
"It is truly an honor to be recognized at this level by my fellow engineers," says Fossum. "I am regularly astonished by the many ways the technology impacts people’s lives here on Earth through products that we didn’t even imagine when it was first invented for NASA. I look forward to continuing to teach and work with the students and faculty at Dartmouth to explore the next generation of image-capturing devices."
Fossum has published more than 250 technical papers and holds over 140 U.S. patents. He is a Fellow of the IEEE and a Charter Fellow of the National Academy of Inventors. He has received the IBM Faculty Development Award, the National Science Foundation Presidential Young Investigator Award, and the JPL Lew Allen Award for Excellence.
"This is the highest honor the engineering community bestows," says Thayer School Dean Joseph Helble, "It recognizes Eric’s seminal contributions as an engineer, technology developer, and entrepreneur. His work has enabled microscale imaging in areas that were unimaginable even a few decades ago, and has led directly to the cellphone and smartphone cameras that are taken for granted. We are honored to have someone of his caliber to oversee our groundbreaking Ph.D. Innovation Program."
"Our CMOS image sensors delivered phenomenal growth during the fourth quarter of 2012, mainly due to numerous design wins in smartphone, tablet, laptop and surveillance applications. We currently offer mainstream and entry-level sensor products with pixel counts of up to 5MP and are on track to release a new 8MP product soon. However, the Q1 prospects for CMOS image sensors look gloomy, as the China market is going through a correction and many of the customers adopting our new sensor products are still finishing up their product tuning.
Notwithstanding the short-term downturn, we do expect the sales of this product line to surge in 2013, boosted by shipments of our new products, many of which were only launched in the second half of last year. We also expect to break into new and leading smartphone brands and further penetrate the tablet, IP Cam, surveillance and automotive application markets."
The primary camera ISP is Fujitsu Milbeaut MB80645C, typically used in DSCs.
Thursday, February 07, 2013
"The first fab [BSI] chip demonstrated good image quality; the complete BSI process technology is targeted to contribute revenue in 2014. This will serve the market for higher-resolution phone cameras and high-performance video cameras."
"Indeed I think the CIS already is a significant portion of our revenue. The CIS, so after – at last year, it is near to between 5% to 7.5% of our revenue coming from the CIS, okay. And BSI, we expect that definitely will enlarge our access to this market. We believe that by fairly 2014, we should have a reasonable amount of our CIS that comes with BSI technology."
The reported annual revenue is $1.7B, thus CIS sales are about $100M. Image sensors are named as one of the main areas that contributed to 2012 revenue growth.
Wednesday, February 06, 2013
Tuesday, February 05, 2013
The new smartphone is expected to be officially announced on Feb. 19.
|The 22 channels of the iQ-LED source|
|The user interface of the control software in prototype stage, with light set D65|
Monday, February 04, 2013
|Location of R-deflectors and detectors in the two-deflector method. R-deflectors split colour to form the colours W + R in areas between neighbouring R-deflectors, and W − R just beneath|
This development is described in general terms in the Advance Online Publication version of Nature Photonics issued on February 3, 2013.
The developed technology has the following features:
- Using color splitters, which can use light more efficiently, instead of color filters, vivid color photographs can be taken at half the light levels needed by conventional sensors.
- Micro color splitters can simply replace the color filters in conventional image sensors, and are not dependent on the type of image sensor (CCD or CMOS) underneath.
- Micro color splitters can be fabricated using inorganic materials and existing semiconductor fabrication processes.
- A unique method of analysis and design based on wave optics that permits fast and precise computation of wave-optics phenomena.
- Device optimization technologies for creating micro color splitters that control the phase of the light passing through a transparent and highly-refractive plate-like structure to separate colors at a microscopic scale using diffraction.
- Layout technologies and unique algorithms that allow highly sensitive and precise color reproduction by combining the light that falls on detectors separated by the micro color splitters and processing the detected signals.
Device optimization technologies leading to the creation of micro color splitters that control the phase of the light passing through a transparent and highly-refractive plate-like structure and use diffraction to separate colors on a microscopic scale:
Color separation of light in micro color splitters is caused by a difference in refractive index between a) the plate-like high refractive material that is thinner than the wavelength of the light and b) the surrounding material. Controlling the phase of traveling light by optimizing the shape parameters causes diffraction phenomena that are seen only on a microscopic scale and which cause color separation. Micro color splitters are fabricated using a conventional semiconductor manufacturing process. Fine-tuning their shapes causes the efficient separation of certain colors and their complementary colors, or the splitting of white light into blue, green, and red like a prism, with almost no loss of light.
Layout technologies and unique algorithms that enable highly sensitive and precise color reproduction by overlapping diffracted light on detectors separated by micro color splitters and processing the detected signals:
Since light separated by micro color splitters falls on the detectors in an overlapping manner, a new pixel layout and design algorithm are needed. The new layout is optimized together with an arithmetic processing technique designed specifically for mixed color signals, resulting in highly sensitive and precise color reproduction. For example, if the structure separates light into a certain color and its complementary color, pixels of white + red, white - red, white + blue, and white - blue are obtained and, using the arithmetic processing technique, are translated into a normal color image without any loss of resolution.
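As a toy illustration (this is not Panasonic's published algorithm, just the simple pixel arithmetic the description implies, assuming ideal splitting and W = R + G + B), the four mixed signals can be unmixed like this:

```python
# Toy sketch of recovering R, G, B from the four mixed color signals
# (W+R, W-R, W+B, W-B) described above. Illustrative only -- assumes
# ideal color splitting and that white satisfies W = R + G + B.

def unmix(w_plus_r, w_minus_r, w_plus_b, w_minus_b):
    """Recover (r, g, b) from the four mixed pixel values."""
    r = (w_plus_r - w_minus_r) / 2  # (W+R) - (W-R) = 2R
    b = (w_plus_b - w_minus_b) / 2  # (W+B) - (W-B) = 2B
    w = (w_plus_r + w_minus_r + w_plus_b + w_minus_b) / 4  # average = W
    g = w - r - b                   # assumes W = R + G + B
    return r, g, b

# Example: a patch with R=10, G=50, B=20 (so W=80)
print(unmix(90, 70, 100, 60))  # -> (10.0, 50.0, 20.0)
```

Note that no light is discarded: every detector measures white plus or minus a color component, which is where the claimed sensitivity advantage over absorptive filters comes from.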
2013 International Image Sensors Workshop pre-registration form is posted and open. Registration is limited to about 140 attendees on a first-come, first-served basis (mostly). At past workshops the capacity was filled within a few days, so hurry up.
Saturday, February 02, 2013
In addition to conventional detection tasks such as detecting the presence of people or obstacles, the new image sensors provide various types of added functionality including distance measurement and shape detection. They have been designed for use in a wide variety of applications, such as automotive systems for detecting people or obstacles, object detection in semiconductor wafer transfer systems, shape detection by industrial robots, and intruder detection by security systems.
Samples of the new distance linear/area image sensors will be available from February 1, 2013.
|Distance area image sensors (top): S11962-01CR, S11963-01CR|
Distance linear image sensor (bottom): S11961-01CR
The PR in Japanese gives a lot more information (Google translation):
Overview of each type:
Linear type (S11961-01CR)
- The industry's first distance linear image sensor with elements arranged in a single row
- 272 total pixels, 256 effective pixels; 20μm pixel pitch, 50μm pixel height
- Composed of a pixel array, sample-and-hold circuit, and horizontal shift register
- Non-destructive readout capability, high dynamic range, and low noise
- Applications: automatic control of automobile and semiconductor wafer transfer systems (determining position, distance, and shape) and detection of people and obstacles
Area type (S11962-01CR)
- Photodetector for building 3D distance image cameras
- 72 x 72 total pixels, 64 x 64 effective pixels; 40μm x 40μm pixel size
- Composed of a pixel array, column CDS circuits, horizontal and vertical shift registers, and timing and analog circuitry
- Applications: robots that recognize people and obstacles in industrial and medical/nursing-care fields; security uses such as detecting PIN peeking at bank ATMs and monitoring crowded escalators and public restrooms; and gesture recognition in amusement, among a wide range of fields under development
Area type (S11963-01CR)
- 168 x 128 total pixels, 160 x 120 effective pixels; 30μm x 30μm pixel size
- Composed of a pixel array, column gain amplifiers, horizontal and vertical shift registers, and a timing circuit
- Column gain amplifiers reduce the influence of noise and improve distance accuracy
- Non-destructive readout capability, high dynamic range, and low noise
- Suited to applications requiring higher resolution than the S11962-01CR
1. Built-in anti-saturation measures for stable operation even under sunlight
These sensors incorporate anti-saturation measures that reduce the effect of strong background light, such as sunlight (approximately 100,000 lux), which has been a weakness of conventional sensors. They achieve high sensitivity, detecting weak signal light with stable operation and few malfunctions, day or night, indoors or outdoors.
2. Non-destructive readout capability to extend the dynamic range
The linear type and the area type S11963-01CR incorporate a non-destructive readout capability: a short charge-accumulation time is used when the background light is strong, and a longer accumulation time when the signal light is weak. This extends the dynamic range from close range out to long distances.
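A minimal sketch of how non-destructive readout can extend dynamic range (the values, full-well level, and function are hypothetical, not from the Hamamatsu datasheet): the pixel is sampled at several accumulation times without being reset, and the longest unsaturated sample is kept.

```python
# Sketch of dynamic-range extension via non-destructive readout.
# Illustrative only; the numbers and API are hypothetical.
# The pixel is read at several accumulation times without reset;
# we keep the longest sample that has not saturated and normalize
# by its accumulation time.

FULL_WELL = 4095  # assumed 12-bit saturation level

def best_reading(samples):
    """samples: list of (accumulation_time_us, raw_value), in order of
    increasing time. Returns the signal rate (counts per microsecond)
    from the longest unsaturated sample."""
    chosen = None
    for t_us, raw in samples:
        if raw < FULL_WELL:
            chosen = (t_us, raw)  # longer time -> better SNR for weak light
    if chosen is None:
        t_us, raw = samples[0]    # all saturated: fall back to the shortest
    else:
        t_us, raw = chosen
    return raw / t_us

# Weak signal light: only the long accumulation gives a usable value
print(best_reading([(10, 4), (100, 40), (1000, 400)]))        # -> 0.4
# Strong background light: long samples saturate, the short one is used
print(best_reading([(10, 1500), (100, 4095), (1000, 4095)]))  # -> 150.0
```

Because the readout is non-destructive, all of these samples come from a single exposure, which is what lets one frame cover both strong background light and weak return signals.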