Transcript: Ohzawa Visual Neuroscience Lab Tour 2010
Tour Guide: Izumi Ohzawa, Dec. 16, 2009 - Jan. 7, 2010
Graduate School of Frontier Biosciences, Osaka University
I'm Izumi Ohzawa, and this is my lab, and I am going to show you what we are doing here.
We are recording right now... recording from a single neuron in the visual cortex, and the animal is anesthetized and sitting there watching a CRT screen, a computer display like this.
And right now, we are measuring its responses to different stripe patterns. You may not be able to see individual patterns, but you can see that it's really changing quite fast.
I am going to stop it here, and I'm going to show you how we do it from the beginning.
Let's abort it now.
And here we have a Mac running Windows. And the kind of stimulus that we use initially is a sinusoidal grating pattern like this. You can see it here. It's drifting up and to the left.
And we can change the angle, called the orientation, using a mouse control here. And neurons in the area we are recording from, V1, are very sensitive to orientation. Each neuron is selective for just one orientation.
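As a sketch of the kind of stimulus being described, a drifting sinusoidal grating can be generated from just an orientation, a spatial frequency, and a phase that advances every frame. The function name and parameter values below are hypothetical illustrations, not the lab's actual stimulus code:

```python
import numpy as np

def drifting_grating(size=256, sf_cpd=0.3, orientation_deg=40.0,
                     phase=0.0, deg_per_pix=0.05, contrast=1.0):
    """One frame of a sinusoidal grating; advance `phase` to make it drift."""
    theta = np.deg2rad(orientation_deg)
    # Coordinate grid in degrees of visual angle, centered on the screen
    xs = (np.arange(size) - size / 2) * deg_per_pix
    x, y = np.meshgrid(xs, xs)
    # Distance along the axis perpendicular to the stripes
    u = x * np.cos(theta) + y * np.sin(theta)
    return contrast * np.sin(2.0 * np.pi * sf_cpd * u + phase)
```

Calling this in a loop with an increasing phase would produce the drifting pattern seen on the display.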
And we are controlling it, and you can see the change in the real pattern reflected over there.
[spike discharge sound]
So, these are the responses from a single neuron in the visual cortex and it's responding to the visual pattern. These pulses are what's conveying the information between neurons. These are called action potentials.
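Before any sorting, action potentials like these are typically isolated from the raw voltage trace by detecting threshold crossings. A minimal sketch with hypothetical threshold and refractory settings (a real spike sorter, like the one shown later, also compares waveform shapes):

```python
import numpy as np

def detect_spikes(voltage, fs_hz, threshold, refractory_ms=1.0):
    """Sample indices where `voltage` crosses `threshold` upward."""
    above = voltage >= threshold
    # Upward crossings: below threshold on one sample, above on the next
    crossings = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    # Enforce a refractory period so one spike isn't counted twice
    gap = int(fs_hz * refractory_ms / 1000.0)
    spikes, last = [], -gap
    for idx in crossings:
        if idx - last >= gap:
            spikes.append(idx)
            last = idx
    return np.array(spikes, dtype=int)
```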
We use about 4 or 5 different computers to do these experiments. And here's one.
This is the main machine that we use to control the entire experiment. And in addition to that, we have another computer that is dedicated to generating visual stimuli.
Right now this is in the search mode.
Let's turn this off.
Here are the responses going. So, we dedicate another computer just to generating visual stimuli. And what we have here is a copy of what the animal is looking at. The actual display the animal is looking at is a little larger than this, but this copy lets us see what the animal is seeing.
And we have another computer that is dedicated for acquiring data, action potential data.
And ... Megumi here is controlling the spike sorter. And this spike sorter is a very old one, a 20-year-old machine. [Added note: SpikeCoder - A spike sorter based on a NeXT computer. The computer is from 1991, and the software I wrote for it is from 1993.]
And that's the one that I am setting up, this one here. Yeah, that's it, and it is used for recording from many neurons at the same time using multielectrodes.
On this side, we have ... how many computers? Two or three computers ... just for analyzing the data.
The data are saved on disk and then shared by file sharing with these computers, and we can look at the data while the experiment is going on on the other system, the experiment system.
OK, so let's just stop it. So far, I've been controlling the stimulus by hand using the mouse, but we can let the computer control it in a systematic way.
And we have preset, canned experiments which we can load from a file, and then, after adjusting the basic parameters, we can just let the computer test the neuron by showing different patterns and recording all the responses.
And right now, it's pre-computing all the stimuli that it's going to use during the experiment.
And you see a bar growing here ... Oh, it's just started an experiment. You can see the pattern displayed over there. And it's displaying about 30 stimuli per second, and each one of those is a single grating, a single stripe pattern. And we are changing the orientation and the spatial frequency, the coarseness, in a very rapid way. And it's randomized, but the computer knows what was presented when, so it can work backward, sort the responses out, and create a map of responses.
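The sorting-out step described here can be illustrated with a toy reverse-correlation computation: because the computer logs which (orientation, spatial frequency) pattern appeared on each frame, averaging the spike counts by stimulus recovers the response map. The simulated neuron below, preferring 45 degrees and 0.2 cycles/degree, is purely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stimulus set: every combination of orientation and
# spatial frequency, shown in random order at roughly 30 per second
orientations = np.arange(0, 180, 15)              # degrees
spatial_freqs = np.array([0.1, 0.2, 0.4, 0.8])    # cycles/degree

n_frames = 3000
ori_seq = rng.integers(0, len(orientations), n_frames)
sf_seq = rng.integers(0, len(spatial_freqs), n_frames)

# Simulated spike counts from a toy neuron preferring 45 deg, 0.2 cpd
preferred = (orientations[ori_seq] == 45) & (spatial_freqs[sf_seq] == 0.2)
spikes = rng.poisson(np.where(preferred, 3.0, 0.1))

# Reverse correlation: average spike count for each stimulus, using the
# logged presentation order to sort the responses back out
resp_sum = np.zeros((len(orientations), len(spatial_freqs)))
counts = np.zeros_like(resp_sum)
np.add.at(resp_sum, (ori_seq, sf_seq), spikes)
np.add.at(counts, (ori_seq, sf_seq), 1)
resp_map = resp_sum / np.maximum(counts, 1)

best_ori, best_sf = np.unravel_index(resp_map.argmax(), resp_map.shape)
```

The peak of `resp_map` then sits at the toy neuron's preferred orientation and spatial frequency, just like the red spot in the measured map shown next.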
So, let's go back and go to the computers here.
This is the computer which is used for analyzing the data, and here we can see data from previous experiments from this animal.
And we can see that this was recorded at 4 AM this morning. And we are going to look at the data here.
You see a red spot here? That's the response. And in this domain, the rest of the points are black, which means that the cell didn't respond to those parameters.
The parameter along the horizontal axis is spatial frequency, the coarseness or fineness of the pattern. So, on the right, it's one cycle per degree, a fairly high spatial frequency for this animal.
And the vertical axis shows the orientation of the stimulus. So, it's preferring about 40 degrees of orientation, and about 0.2 or 0.3 cycles per degree of spatial frequency.
And we do a fit, a Gaussian fit, which is shown here. So, that's the raw data, and these lines are the Gaussian fit. This is the Gaussian fit profile.
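A Gaussian fit to a tuning curve can be sketched like this. The tuning data below are synthetic, generated with a peak near the 40 degrees mentioned above, and the grid-search procedure is just one simple way to fit, not necessarily the lab's actual method:

```python
import numpy as np

def gaussian(x, center, width):
    return np.exp(-0.5 * ((x - center) / width) ** 2)

# Synthetic orientation tuning curve peaking at 40 deg (assumed values)
angles = np.arange(0, 180, 15.0)                    # tested angles, degrees
rates = 1.0 + 20.0 * gaussian(angles, 40.0, 18.0)   # responses, spikes/s

# Simple fit: grid-search the nonlinear parameters (center, width) and
# solve amplitude and baseline by linear least squares at each point
best = None
for center in np.arange(0.0, 180.0, 1.0):
    for width in np.arange(5.0, 60.0, 1.0):
        basis = np.column_stack([gaussian(angles, center, width),
                                 np.ones_like(angles)])
        coef = np.linalg.lstsq(basis, rates, rcond=None)[0]
        err = np.sum((basis @ coef - rates) ** 2)
        if best is None or err < best[0]:
            best = (err, center, width, coef[0], coef[1])

err, center, width, amp, baseline = best
```

The fitted `center` is the preferred orientation; the same idea, applied jointly over orientation and spatial frequency, yields the fitted profile overlaid on the raw data.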
So, that's a polar representation of this data.
And here, because it was a complex cell, we can't map the receptive field directly, but we can look at the internal structure of this cell's receptive field.
The receptive field is the part of the visual field that a cell is looking at. Each of these neurons has a very limited visual field, and as you can see, it's only a few degrees in diameter.
And because it's oriented like this, this determines the orientation preference of the particular neuron.
So, this can be computed from the data that were measured in the frequency domain, that is, which stripe patterns it liked and to what extent those stimuli were effective.
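Recovering a space-domain receptive-field profile from frequency-domain measurements amounts to an inverse Fourier transform of the response map. The sketch below builds a hypothetical frequency-domain map (a Gaussian bump at 0.25 cycles/degree and 40 degrees orientation, mirrored so the spectrum stays Hermitian) and inverts it; all numbers are assumed, not measured:

```python
import numpy as np

n = 128
deg_per_pix = 0.05                       # assumed spatial sampling
f = np.fft.fftfreq(n, d=deg_per_pix)     # frequency axes, cycles/degree
fx, fy = np.meshgrid(f, f)

# Hypothetical frequency-domain response: a Gaussian bump at the
# preferred stimulus, 0.25 cycles/degree at 40 degrees orientation
theta = np.deg2rad(40.0)
fx0, fy0 = 0.25 * np.cos(theta), 0.25 * np.sin(theta)
bw = 0.08                                # assumed bandwidth, cycles/degree

def bump(cx, cy):
    return np.exp(-((fx - cx) ** 2 + (fy - cy) ** 2) / (2.0 * bw ** 2))

# A real-valued receptive field requires a Hermitian spectrum, so the
# bump is mirrored to the opposite frequency as well
amplitude = bump(fx0, fy0) + bump(-fx0, -fy0)

# Inverse transform back to visual space; fftshift centers the profile
rf = np.fft.fftshift(np.real(np.fft.ifft2(amplitude)))
```

The result is an oriented, striped patch a few degrees across, which is why the narrow frequency-domain tuning measured above determines both the orientation preference and the limited receptive-field size.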
So, these are some of the first measurements that we do when we get a neuron or a group of neurons. And from this point on, we will do more elaborate measurements to find more about these neurons.
And if you are interested in more, please visit our web site which is shown here:
or call me or email me.
[video] +81-6-6850-6520, email@example.com.
[video] Copyright 2010 Izumi Ohzawa, All Rights Reserved
[video] Recorded and Produced: Dec. 16, 2009 ~ Jan. 7, 2010
[I made several grammatical errors when speaking without a script, e.g., is/was, is/are, has/have, etc. These were corrected in the transcript to the extent I could. Additional errors may still remain.]