Researchers at the Centre for Neuroscience (CNS) at the Indian Institute of Science (IISc) have shown how a brain-inspired image sensor can detect minuscule objects, such as cellular components or nanoparticles, that are invisible to current microscopes.

The technique combines optical microscopy with a neuromorphic camera and machine learning algorithms, equipping the sensor to go beyond the diffraction limit. It represents “a major step forward” in pinpointing objects smaller than 50 nanometers, IISc said on Tuesday. The results are published in Nature Nanotechnology.

The diffraction limit prevents optical microscopes from resolving two objects separated by less than a certain distance (typically 200-300 nanometers).
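That figure follows from the physics of light itself. As a rough, illustrative calculation (the wavelength and numerical aperture below are typical values, not figures from the study), the Abbe diffraction limit gives the smallest resolvable separation as

```latex
% Abbe diffraction limit: smallest resolvable separation d for light of
% wavelength \lambda through an objective of numerical aperture NA.
% Values chosen for illustration: green light, high-end oil-immersion lens.
d = \frac{\lambda}{2\,\mathrm{NA}}
  \approx \frac{550\ \text{nm}}{2 \times 1.4}
  \approx 196\ \text{nm}
```

which is why visible-light microscopes cannot ordinarily separate features much closer together than about 200 nanometers.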
Deepak Nair, Associate Professor at CNS and corresponding author of the study, said that very few groups have tried to use the detector itself to surpass this limit.

The neuromorphic camera used in the study, roughly 40 mm (height) by 60 mm (width) by 25 mm (depth) and weighing about 100 grams, mimics the way the human retina converts light into electrical impulses. It has several advantages over conventional cameras, IISc said.

In conventional cameras, each pixel captures the intensity of the light falling on it, and these pixel values are pooled together to reconstruct an image of the object.

In neuromorphic cameras, each pixel operates independently, generating sparse data in far lower volumes. The process is similar to how the human retina works, and allows the camera to “sample” the environment with much higher temporal resolution.

Chetan Singh Thakur, Assistant Professor at the Department of Electronic Systems Engineering, IISc, and a co-author, said such neuromorphic cameras can operate across a very wide dynamic range, from very low-light environments to very bright conditions.

The team used the neuromorphic camera to pinpoint individual fluorescent beads smaller than the diffraction limit, by shining laser pulses of high and low intensity and measuring the variation in the fluorescence levels.

Rohit Mangalwedhekar, a former research intern at CNS and first author of the study, said that one of the two methods developed involved a deep-learning algorithm to accurately locate the fluorescent particles within the frames. Together, the methods allowed the team to determine the objects’ precise locations with greater accuracy than existing techniques.
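To make the contrast with frame-based cameras concrete, here is a minimal Python sketch of the event-based principle described above. It is an illustrative toy model, not the study’s code or the sensor’s actual circuitry: each pixel independently emits an event only when the change in log-intensity at that pixel crosses a threshold, so a static scene produces almost no data.

```python
import numpy as np

def events_from_frames(frames, threshold=0.2):
    """Toy event-camera model (hypothetical helper, for illustration only):
    each pixel fires independently when the log-intensity change since its
    last event exceeds `threshold`.
    `frames`: array of shape (T, H, W) of non-negative intensities."""
    log_ref = np.log(frames[0] + 1e-6)   # per-pixel reference level
    events = []                          # sparse (t, y, x, polarity) list
    for t, frame in enumerate(frames[1:], start=1):
        diff = np.log(frame + 1e-6) - log_ref
        fired = np.abs(diff) >= threshold
        for y, x in zip(*np.nonzero(fired)):
            events.append((t, int(y), int(x), int(np.sign(diff[y, x]))))
        log_ref[fired] += diff[fired]    # reset only the pixels that fired
    return events

# A static scene yields no events; only the pixel that changes fires.
frames = np.ones((5, 4, 4))
frames[2:, 1, 1] = 2.0                   # brighten one pixel from t = 2 on
print(events_from_frames(frames))        # -> [(2, 1, 1, 1)]
```

Because only changing pixels report anything, the output stays sparse however long the recording runs, which is what lets such sensors sample fast dynamics without drowning in redundant frames.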
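The article does not spell out the localization algorithms themselves. As a generic illustration of how sub-diffraction localization works in principle, the sketch below uses a simple intensity-weighted centroid, a standard textbook approach and explicitly not the team’s deep-learning method: even though the spot itself is diffraction-blurred, its center can be estimated to a small fraction of a pixel.

```python
import numpy as np

def centroid_localize(patch):
    """Estimate an emitter's sub-pixel position as the intensity-weighted
    centroid of a small patch around the spot. A generic, standard
    illustration; not the deep-learning method used in the study."""
    patch = patch - patch.min()              # crude background subtraction
    ys, xs = np.indices(patch.shape)
    total = patch.sum()
    return (ys * patch).sum() / total, (xs * patch).sum() / total

# Synthetic diffraction-limited spot centered between pixel grid points.
ys, xs = np.indices((9, 9))
spot = np.exp(-((ys - 4.3) ** 2 + (xs - 3.7) ** 2) / (2 * 1.5 ** 2))
print(centroid_localize(spot))               # approx (4.3, 3.7)
```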