    The white paper 'Smart Sensors – Enabling the Intelligent Internet of Things' discusses the key trends in sensor development and how sensing technologies such as radar, LiDAR, and time-of-flight imaging provide even richer information by allowing systems to perceive objects in 3D space. This blog extends that discussion, focusing on how the pixel aperture approach can be used to extract depth information from a CMOS image sensor while still producing normal 2D pictures, the pixel circuits involved, and how to overcome the obstacles of a 3D image sensing system.

    Today, 3D images and videos have become common in our daily lives, so how to capture 3D content and combine it with 2D content is an important topic. In general, two two-dimensional images taken from different perspectives can be combined to create a three-dimensional image, a mechanism that mimics human binocular vision.

    For example, a 3D camera records two images at the same time and displays them separately to the right and left eyes for 3D perception. This is a passive approach. Alternatively, an active approach uses a light source to sense the depth of an object: based on the properties of the reflected light, a three-dimensional image can be reconstructed through post-processing calculations. In particular, the time-of-flight (TOF) method estimates the travel time of light that is emitted by a source, reaches an object, is reflected by it, and returns to the sensor. The depth of objects can then be deduced from the different travel times recorded in the pixels.
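    The TOF principle above reduces to a simple formula: depth is half the round-trip distance covered by light in the measured travel time. A minimal sketch:

```python
# Convert a measured round-trip travel time into object depth.
C = 299_792_458.0  # speed of light in m/s

def tof_depth_m(travel_time_s: float) -> float:
    """Depth = (speed of light * round-trip time) / 2,
    halved because the light travels to the object and back."""
    return C * travel_time_s / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m of depth.
print(tof_depth_m(10e-9))
```

    The factor of two is what makes timing precision so demanding: each centimetre of depth corresponds to only about 67 ps of round-trip time.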

    The two-dimensional and three-dimensional image sensors are combined around the same photodiodes. In the 2D and 3D modes, a correlated double sampling (CDS) circuit and a time-to-digital converter (TDC) are adopted, respectively. The readout circuits use linear readout in 2D mode and parallel readout in 3D mode; accordingly, a multi-channel TDC is used to achieve the parallel readout.

    2D/3D integrated image sensor

    To integrate the 2D and 3D image sensors effectively, a P-diffusion/N-well/P-substrate photodiode is adopted and controlled to operate in both 2D and 3D photodetection modes. For the 2D mode, the CDS circuit and CDS readout circuit are designed, together with a row decoder, a column decoder and a controller. For the 3D mode, sense amplifiers (SA), a TDC readout and a TDC are implemented. The SA amplifies the trigger pulse from a pixel to reduce the readout time from the pixel to the TDC. Figure 1 shows the block diagram of the 2D/3D integrated image sensor.

    Figure 1: Block diagram of the 2D/3D integrated image sensor

    Pixel Circuits

    In 2D mode, the main goal is to acquire grey-level information, so dynamic range is the main factor: a greater dynamic range means a wider range of photocurrents can be detected. The 2D pixel circuit, shown in Figure 2, has an additional path to slow down charge saturation. This path supplies charge to compensate for the photocurrent discharge.
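    As a brief illustration of what "greater dynamic range" means numerically, the dynamic range is conventionally the ratio of the largest to the smallest detectable photocurrent, expressed in decibels. The specific current values below are hypothetical, not taken from this sensor:

```python
import math

def dynamic_range_db(i_max: float, i_min: float) -> float:
    """Dynamic range in dB: 20*log10(largest / smallest detectable photocurrent)."""
    return 20.0 * math.log10(i_max / i_min)

# Hypothetical values: 10 nA saturation current vs. a 10 pA noise floor -> 60 dB.
print(dynamic_range_db(10e-9, 10e-12))
```

    The anti-saturation path raises the effective i_max the pixel can tolerate, which is why it extends the dynamic range.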

    Figure 2: 2D Pixel circuit

    In 3D mode, the pixel circuit detects objects with depth information. The photodiode is reverse-biased near its avalanche breakdown voltage, so that a single detected photon induces a large current; this is called Geiger mode. This phenomenon lets the pixel circuit detect photons very quickly. Figure 3 shows the 3D pixel circuit.

    Figure 3: 3D Pixel circuit

    When the pixel circuit receives a light trigger, the photodiode generates a large photocurrent which flows through the PMOS transistor M1, which acts as a resistor. This produces a voltage drop at the photodiode node N, which is sharpened by an inverter. During reset, the pixel operates in a charging phase in which the photodiode node N is biased to Vdd. The PMOS transistor M4, which connects to the inverter, and the NMOS transistor M5, which connects to ground, form a pull-down feedback path. After the reset, the photodiode starts detecting photons and performs a discharge action; meanwhile, M4 and M5 are turned on to quickly pull the inverter input down to 1/2 Vdd. Figure 4 shows the integrated 2D/3D pixel circuit, which can be easily switched between modes using the 2D and 3D control signals.

    Figure 4: 2D/3D integrated pixel circuit

    Multi-channel TDC

    A 3D image detection system faces a problem in the number of timing circuits required. With traditional timing circuits, giving each pixel its own timing circuit for depth calculation has clear drawbacks: that many timing circuits means a large hardware area and high power consumption. Therefore a multi-channel TDC, consisting of a ring TDC, a thermometer encoder and a 4-bit counter, is used to overcome these drawbacks. A 15-stage ring TDC is designed as the core of the multi-channel timing circuit. When the start signal is active, a NAND gate and 14 inverters form a ring oscillator. The 15 outputs of the ring TDC are compressed by the thermometer encoder into a fine 4-bit result, which is stored in a latch array. At the same time, the counter produces a coarse 4-bit result, which is also stored in the latch array. Together, the 4-bit coarse and fine results encode the depth information.
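    A rough sketch of how the coarse and fine results might be combined into a travel time. The 15-stage ring length comes from the article; the per-stage delay and the decoding convention (counter counts full ring revolutions, fine code gives the position within the current revolution) are illustrative assumptions:

```python
STAGES = 15       # ring TDC stages (per the article)
TAU_S = 100e-12   # assumed delay per stage: 100 ps (illustrative, not specified)

def tdc_time_s(coarse: int, fine: int) -> float:
    """Reconstruct the measured interval: the 4-bit counter counts full
    ring revolutions (15 stage delays each); the thermometer-encoded
    fine code locates the edge within the current revolution."""
    return (coarse * STAGES + fine) * TAU_S

# e.g. 3 full revolutions plus 7 stages -> (3*15 + 7) * 100 ps = 5.2 ns
print(tdc_time_s(3, 7))
```

    This coarse/fine split is the point of the design: one shared ring and counter serve many pixels, instead of one full timing circuit per pixel.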

    Figure 5: Multichannel TDC

    Operation of the sensor

    During a 3D measurement, an external signal resets the pixel circuit, starts the TDC oscillating and triggers the light emission. The sensor then waits for the light reflected from objects and measures the travel time with the TDC. The object depth can be determined from the measured travel time.
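    Putting the measurement sequence together, one plausible end-to-end calculation from a raw TDC reading to an object depth. The stage delay and TDC decoding are illustrative assumptions, not values from this sensor:

```python
C = 299_792_458.0            # speed of light, m/s
STAGES, TAU_S = 15, 100e-12  # ring length (per the article), assumed stage delay

def depth_from_tdc(coarse: int, fine: int) -> float:
    """TDC code -> round-trip travel time -> depth (halved for the return trip)."""
    travel_time = (coarse * STAGES + fine) * TAU_S
    return C * travel_time / 2.0

# A reading of coarse=10, fine=5 corresponds to 15.5 ns, i.e. about 2.32 m.
print(depth_from_tdc(10, 5))
```

    In practice the emitter trigger, TDC start and pixel reset must share a common time reference, which is why a single external signal initiates all three.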

    Figure 6: Sensor operation

    Here, an FPGA board is programmed to control the system for recording 2D and 3D images with the TOF sensor. The depth of a cylindrical box is calculated from the measured time of flight of light from 850 nm light-emitting diodes, and the 2D/3D images are generated by software running on a PC.

    To learn more about smart sensors and key trends in sensor development, read our white paper 'Smart Sensors – Enabling the Intelligent Internet of Things'.
