Today was another loonnnnggggg day. Last night we went out to a club to dance and relax after all the mentally straining work we’ve been doing. I got to wear a shirt I found in South Philly!

Only $20!
Needless to say we had a lot of fun, and I’m going to skip over the rest of the night because it’s not that important.

Thalamic and cortical circuits for vision in the mammalian brain
Diego Contreras
Today was sensory perception day. We started out learning about the anatomy of neurons and essentially beat the topic into the ground talking about action potentials, dendrites, axon girth, myelination, etc., etc… until we ran out of time and had to move on to the next lecture. The main point was that the thalamus is the relay point for almost all of our sensory perception. Stimuli are first processed where they enter the brain, then travel through the thalamus, which relays the information to other areas; the information then loops back through these circuits several times as new information comes in. All our sensory perception is essentially put together and integrated through this looping process. The only sense without a thalamic pathway is olfaction, since olfaction evolved long before the thalamus was present in mammalian brains. 
In future research, Dr. Contreras urged that we keep the thalamus in mind, since without it, there are no sensory processes (or at least, there’s not a functional circuit). Also, myelin = bacon. 
Encoding of Natural Sounds in Auditory Cortex
Maria Geffen
Our second lecture dealt with auditory processing in mice, though we didn’t learn a whole lot. Sensory information can be integrated, compacted, and generalized in the human brain, meaning that instead of seeing a picture pixel by pixel, we make generalizations like, “Oh, that’s in the shape of a circle, and it’s on a white background,” instead of “This pixel is white, and those three are black… etc… so it must be this shape,” which is closer to how computers represent that information. 
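If I were to sketch that pixel-versus-generalization difference in code (a totally made-up toy example of mine, nothing from the lecture), it might look something like this:

```python
# Toy illustration: a pixel-by-pixel description of an image vs. a
# "generalized" summary of it. The 5x5 binary "image" below is invented:
# 1 = black pixel, 0 = white background.
image = [
    [0, 1, 1, 1, 0],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 1, 1, 1, 0],
]

# "Computer-style" description: enumerate every single black pixel.
pixel_description = [(r, c) for r, row in enumerate(image)
                     for c, val in enumerate(row) if val == 1]

# "Brain-style" description: just a couple of summary features.
n_black = len(pixel_description)
rows = [r for r, _ in pixel_description]
cols = [c for _, c in pixel_description]
center = (sum(rows) / n_black, sum(cols) / n_black)

print(f"{n_black} individual pixel coordinates, or one summary:")
print(f"a ring-like shape of {n_black} pixels centered near {center}")
```

Twelve coordinate pairs collapse into “a ring centered at (2, 2)” — obviously the real brain does something far more complicated, but that’s the flavor of the compression.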
We learned about resonant frequencies and how the brain interprets them. Auditory signals at various frequencies vibrate the tympanic membrane, and those vibrations are passed along to the cochlea, where the cilia of hair cells translate them into electrical signals. The signals travel along neurons (each of which fires according to which frequencies are present), get sent to the brain stem, shoot through the thalamus, and are finally processed in the primary auditory cortex. Whew. The neuronal responses vary according to whether the sound is natural, sped up, slowed down, or reversed, and this response also varies according to which species is hearing which sound. If mice hear mouse sounds, there are peaks in neuronal activity, but there is no such activity if the sound comes from a marmoset, at least in specific neurons. Basically, about 30% of neurons in the primary auditory cortex actually favor and are activated by ‘natural sounds,’ and do not exhibit the same excitatory properties when presented with other stimuli. In humans, many neurons are specifically sensitive to human speech sounds. This is part of what allows us to focus on someone who’s talking to us in a noisy room over music or other voices. It’s pretty cool. 
Also, rats sing to each other at ultrasonic frequencies. It’s kinda weird. 
After our lectures today we did some really, really cool stuff. We went to another lecture sort of thing at the haptics lab near campus. The haptics lab does research on tactile functions of the brain and engineers devices to simulate the sense of touch in computers, robots, and various other mechanical tools. I didn’t have a chance to take notes during the lecture but I did have a chance to take some pictures in the robotics lab, and some of their equipment, if not all of it, was pretty damn impressive. 
This robot cost $400,000+. It does high fives, fist pumps, and house chores. No joke. 

The little blue things are simulated fingers. They have little sensors underneath a rubbery membrane with fluid in between them. By measuring the pressure and conductivity at different points, the computer can measure what it’s touching and the state of the sensors. These just came out last year and the technology is already really impressive, in my opinion. They’re supposedly going to be used for robot hands, so they can ‘feel’ what they’re touching and better measure how hard to hold something.
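Here’s a toy sketch of the sensing idea as I understood it (entirely my own simplification — the grid, threshold, and numbers are made up, not the lab’s actual algorithm): take a grid of pressure readings from under the membrane, threshold it to find where contact is, and add up the pressure to estimate how hard the finger is gripping.

```python
# Toy sketch: turning a grid of pressure readings from a simulated
# fingertip into a contact estimate. All values here are invented.
readings = [
    [0.0, 0.1, 0.0, 0.0],
    [0.1, 0.9, 0.8, 0.0],
    [0.0, 0.7, 1.0, 0.1],
    [0.0, 0.0, 0.1, 0.0],
]
THRESHOLD = 0.5  # pressures above this count as "in contact"

# Which sensor cells are touching something?
contact = [(r, c) for r, row in enumerate(readings)
           for c, p in enumerate(row) if p > THRESHOLD]

# A crude total-force estimate: sum the pressure at the contact points.
total_force = sum(readings[r][c] for r, c in contact)

print(f"contact at {len(contact)} points, total force ~{total_force:.1f}")
```

A robot hand could use something like that total-force number as feedback to decide whether to squeeze harder or ease off.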

Now this (along with the next picture) is a cool machine. You sit on the chair, put your eyes up to the viewing glasses, and put your hands on the control mechanisms. By moving your hands and fingers, you can control the instruments (pictured below) to perform various tasks. You can pick up objects, move them around, and push them with a lot of precision. It’s been used for a few years to do surgery accurately and without making a large incision. Now, here’s the coolest part: they have recently added a feature which actually allows you to feel what the instrument would be feeling. It resists if you try to move it where the instrument would not normally be able to go (and if you accidentally scratch the table, you friggin’ feel it!). It’s one of the coolest things I’ve ever seen. 

Similar to the above apparatus, this machine measures the vibrations on an instrument, and replicates the exact same feeling in another, attached metal rod. The application for this is dentistry at the moment. By scratching on a tooth with a cavity (or on a normal tooth) and recording the vibrations as a sound file from within the instruments, the exact same feeling can be replicated anywhere with the machine and another of the special metal rods. In this way, students learning dentistry can feel exactly how it feels to find a cavity as opposed to a healthy tooth just by holding a rod. And it’s almost mind blowing how accurately you feel the vibration. I scratched the table with the regular instrument and held the metal rod in my other hand, and felt the same thing. Whoah. 

The software on the computer is a 3-D modeling program which contains all sorts of information about whatever object has been programmed into it. Here, there’s a layer of gel. The apparatus in the guy’s hand is used to move the ball around on the screen in three dimensions, and it will generate resistance where there is resistance on the screen. So if you try to move the object on the screen with the ball, you will feel the gel bouncing back and generating resistance, even when it’s bobbing up and down. There are several other objects, like a camera, a duck, etc., that were also available in the demo. It was crazy. 
In the same way the dentist tool vibrate-y thing works, this tablet and pen set mimics the vibrations of pre-recorded instruments on various surfaces. By using a measuring instrument to record vibration signals at various applied pressures and speeds on various surfaces, and subsequently modeling those vibration patterns in a computer, those signals can be generated in the tablet pen by a little metal coil/magnet. You can change what material you’re sampling on the tablet, and then by writing on it, you can actually feel what it’s like to write on that surface. There was also a little button that said “PUSH,” which, when pushed, made it feel like you were breaking the glass on the tablet. But like, legitimately. SO COOL. 
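The record-and-replay idea could be sketched roughly like this (a toy of my own: the surface names, frequencies, and the sine wave standing in for a real recording are all made up):

```python
import math

# Toy sketch of texture record-and-replay: a pre-recorded vibration
# waveform for each surface is looked up and scaled by pen pressure,
# then the samples would be sent to the coil/magnet actuator.

def texture_wave(freq_hz, n_samples, rate_hz=1000):
    """Stand-in for a recorded vibration signal: a simple sine wave."""
    return [math.sin(2 * math.pi * freq_hz * i / rate_hz)
            for i in range(n_samples)]

# Fake "recordings" for two invented surfaces.
RECORDINGS = {
    "smooth_glass": texture_wave(50, 100),   # low, gentle vibration
    "rough_wood":   texture_wave(220, 100),  # buzzier vibration
}

def playback(surface, pressure):
    """Scale the stored waveform by applied pressure (0 to 1)."""
    return [pressure * s for s in RECORDINGS[surface]]

samples = playback("rough_wood", 0.5)
print(f"{len(samples)} actuator samples, peak amplitude {max(samples):.2f}")
```

The real system presumably also interpolates between recordings made at different speeds and pressures, but the core trick — store vibrations once, replay them anywhere — is the same.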
After the robotics lab we went to dinner and then headed down to Chinatown with one of the IRCS faculty members. We had some nice Chinese food, saw a rainbow, got some bubble tea, and went home. It was a great evening, and I’m keeping the end of this post short because I’m SO TIRED.



Someone mentioned my shirt matched the stairs and my shorts matched the walls, so I turned upside down for a photo op. 
