
How an innovative grants program (and Belgian beer mixers) at Johns Hopkins fuels discoveries about the human brain

Written By Unknown on Monday, February 2, 2015 | 2:06 AM



A neuroscientist, an electrical engineer, a surgeon, and an education researcher walk up to a bar.

This could be the start of a joke, or it could be a scene from a recent Science of Learning Institute event at Johns Hopkins University. At the institute's four-times-yearly Belgian Beer Events, scientists from far-flung fields—and often from far-flung parts of the university itself—present their research to each other in short, digestible chunks. Their creativity and conviviality stimulated by a cup of ale or lager, the researchers strike up conversations and form connections that range widely across disciplinary boundaries, from classroom learning to machine learning, from recovery from stroke to memory formation in the brain.

Such conversations can be all too rare at a university where faculty are spread not just across a campus but throughout a large city and beyond. The result, for an inherently interdisciplinary subject like the science of learning, is that projects that could address fundamental and important questions can be hard to conceive and get off the ground. And too often, promising basic research doesn't get translated into the settings where it could help real-world learners.

The Belgian Beer Events, conceived shortly after the institute launched in 2013, are helping change that. They provide an informal space where basic researchers can meet translators, where machine-learning experts can meet early-childhood educators, where cognitive scientists can meet smartphone app developers. The events rotate between locations: October's was at the School of Education, and December's was hosted by the Department of Biomedical Engineering; previous ones were held at the School of Medicine and in Homewood's Levering Hall. Computer scientist Greg Hager likens the events to "an intellectual mixing bowl."

Beyond generating lively conversation, the gatherings are sparking collaborations between researchers who otherwise might never have met. At an event in 2013, neurologist Bonnie Nozari presented her work on speech and language processing disorders. Computer scientist Raman Arora then spoke about his work on machine learning and speech recognition. Recognizing a mutual interest in speech, the two chatted. The next day, they began planning a joint project to see if computers can predict how humans will pronounce words, and then provide feedback to people seeking to learn a new language, or to relearn how to speak after a stroke.

It sounds like a lucky encounter, but in fact electrical engineer Sanjeev Khudanpur, a member of the institute's steering committee, was at work behind the scenes. He conceived the Belgian Beer Events, and he made sure that Arora, his colleague in the Whiting School of Engineering, would be speaking on the same day as Nozari, of the School of Medicine. Later, when the two were ready to apply for funding, Khudanpur encouraged their ultimately successful proposal for one of the institute's research grants. "I see myself as a matchmaker," he says.

"It's that kind of really innovative, different seeding of projects that I think we've done really well," says Barbara Landau, the institute's director and the Dick and Lydia Todd Professor of Cognitive Science in the Krieger School of Arts and Sciences. The institute funded eight projects in 2013 and eight more in 2014, with projects receiving an average of $140,000 spread over two years. Funding goes to hiring graduate students and postdoctoral researchers, developing software, purchasing equipment, and supplying other research needs. The grants are competitive; the review committee has received around 30 proposals a year. The funded projects address a broad range of learning settings, from the classroom to the operating room to distance learning that can take place anywhere. The learners are not limited to humans, either; many of the projects include a strong component of "machine learning"—harnessing computers to recognize patterns in data and use them to develop new human learning applications. Other projects focus on developing animal models that can be used to study human learning.

The grant program allows researchers to get support for projects that might not be quite ready for a proposal to a traditional funding agency like the National Science Foundation or the National Institutes of Health, says Landau. Almost without exception, an NSF or NIH review panel will want to see at least preliminary data demonstrating that an idea is viable. With Science of Learning Institute funding, scientists can do exploratory research that will provide the data needed to support a larger proposal to a more traditional funding agency. "It allows people to do things that they wouldn't necessarily be able to accomplish by a standard grant," says Landau. "The granting agencies tend to be somewhat conservative, and we're looking for innovation."

Like Arora and Nozari's collaboration, many of the funded projects harness existing technological applications to improve learning, often in novel ways. For example, Khudanpur and Hager are working with Gyusung Lee, an instructor of surgery in the School of Medicine, to develop computer software that can help teach surgeons how to use the da Vinci robotic surgical platform. The project grew out of an existing effort called the Language of Surgery, developed by researchers in the Whiting School of Engineering's Laboratory for Computational Sensing and Robotics.

Through this effort, which began in 2006, Hager, Khudanpur, and colleagues program computers to record and analyze the different kinds of movements that surgeons make while performing certain tasks with surgical robots. The researchers' goal was to find movements that could consistently be classified as either expertlike or novicelike. Novice surgeons are more likely to break a suture, for example, or to push or pull on tissue while using the robot to manipulate a surgical needle. The researchers were able to train computer software to recognize such expert and novice movements much as a surgical trainer would.
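For readers curious what such a classifier might look like in practice, here is a minimal, purely illustrative sketch: it reduces synthetic tool-tip trajectories to a few kinematic features (path length, speed, smoothness) and trains an off-the-shelf classifier to separate "expert-like" from "novice-like" segments. The features and data are assumptions chosen for illustration; this is not the Language of Surgery team's actual pipeline.

```python
# Illustrative sketch only: label short movement segments as "expert" or
# "novice" from summary kinematic features. Features and synthetic data are
# assumptions for illustration, not the Language of Surgery pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def summarize_segment(positions, dt=0.01):
    """Reduce a (T, 3) array of tool-tip positions to four kinematic features:
    path length, mean speed, speed variability, and mean jerk (smoothness)."""
    vel = np.diff(positions, axis=0) / dt
    speed = np.linalg.norm(vel, axis=1)
    jerk = np.diff(vel, n=2, axis=0) / dt ** 2
    return np.array([
        speed.sum() * dt,                     # total path length
        speed.mean(),                         # mean speed
        speed.std(),                          # speed variability
        np.linalg.norm(jerk, axis=1).mean(),  # mean jerk magnitude
    ])

def fake_segment(expert):
    """Synthetic stand-in trajectory: 'experts' move more smoothly than 'novices'."""
    noise = 0.002 if expert else 0.01
    t = np.linspace(0, 1, 100)[:, None]
    clean = np.hstack([np.sin(3 * t), np.cos(3 * t), t])
    return clean + rng.normal(0, noise, clean.shape)

X = np.array([summarize_segment(fake_segment(expert=(i % 2 == 0))) for i in range(200)])
y = np.array([i % 2 == 0 for i in range(200)])  # True = expert-like

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(2))
```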

The next step is to have the assessment tool provide real-time feedback to surgical trainees. With the kind of application the researchers are envisioning, trainees could, in theory, receive an unlimited amount of individualized feedback on what skills they have mastered and where more work is needed. "We're putting the computer in the human learning loop," Khudanpur says. "The computer has certain abilities that are complementary to humans. [For example,] the computer doesn't get tired. The computer usually doesn't charge by the hour."

A few years ago, when the researchers applied for an NIH grant to develop such a learning application, the proposal was rejected because they had no data showing the idea had promise. Thanks to their Science of Learning Institute research award, the scientists are starting to collect that data. Backed by some preliminary results, they recently put in a new NIH proposal and are waiting to hear back.

Meanwhile, thanks to a talk Hager gave last fall, his team's research may soon spawn another effort, which would take Language of Surgery technology out of the operating room and into the classroom. Hager's presentation inspired Landau and Amy Shelton, a professor in the School of Education, who is also on the institute's steering committee, to wonder whether motion-tracking software could recognize the movements that young children make when learning to build toy towers out of blocks. Spatial skills like tower building, in addition to being important in their own right, are of interest to researchers because they often predict children's future abilities in math and other areas. Hager, Landau, and Shelton are now discussing a potential project to put motion sensors on blocks and use computers to track how children acquire manipulation skills, a tactic similar to the one Hager's team uses to assess the skills of aspiring surgeons.

Institute-funded collaborations between computer scientists and education researchers are also reaching far beyond traditional education settings like medical training. In a project funded in 2014, computer scientists Philipp Koehn and Jason Eisner are teaming with Chadia Abras in the School of Education's Center for Technology Education to develop a radically new way to learn a foreign language. The idea is based on macaronic language—a kind of text that mixes two languages into a Spanglish-like hybrid. While such mixing has traditionally been employed by novice speakers or for satirical purposes, Eisner realized that coupled with recent advances in machine translation, it could also help introduce learners to foreign vocabulary and syntax in a gentle and piecemeal way rather than all at once, as in a typical foreign text read laboriously with the aid of a dictionary.

To implement the idea, the researchers are developing software that translates a text progressively, with more and more of the text appearing in the foreign language as the reader's comprehension improves. For an English-to-German learner, for instance, the English phrase "a loaf of bread" could start to appear as "ein Loaf of Bread." When the reader is comfortable with reading the German word "ein" instead of the English "a," the program could progress to "ein Breadloaf," resembling German in syntax but retaining English words. The text would then become "ein Brot loaf," and finally the fully German "ein Brotlaib." The program will intermittently assess the student's reading comprehension and ability, and tune the amount of foreign language presented to the reader's progress; readers also can direct the program to make the translation easier or harder.
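As a rough illustration of the progressive-mixing idea (and not the actual Koehn and Eisner system, which relies on machine translation rather than a fixed glossary), the toy sketch below swaps in German words from a tiny hand-made glossary, replacing more of the English as a learner-proficiency parameter rises.

```python
# Toy sketch of progressive macaronic mixing. The glossary and the
# proficiency-based schedule are illustrative assumptions; the real project
# uses machine translation, not word-for-word lookup.
GLOSSARY = {"a": "ein", "loaf": "Laib", "of": "von", "bread": "Brot"}

def macaronic(text, proficiency):
    """Render `text` with a fraction of glossary words shown in German.
    proficiency: 0.0 = all English, 1.0 = every glossary word in German."""
    words = text.split()
    replaceable = [i for i, w in enumerate(words) if w.lower() in GLOSSARY]
    n_replace = round(proficiency * len(replaceable))
    to_replace = set(replaceable[:n_replace])  # simple schedule: earliest words first
    return " ".join(
        GLOSSARY[w.lower()] if i in to_replace else w
        for i, w in enumerate(words)
    )

for p in (0.0, 0.5, 1.0):
    print(f"proficiency {p:.1f}: {macaronic('a loaf of bread', p)}")
# proficiency 0.0: a loaf of bread
# proficiency 0.5: ein Laib of bread
# proficiency 1.0: ein Laib von Brot
```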

Since the concept still needs to be proved, it makes an ideal Science of Learning Institute project, says Koehn. Eisner adds, "It's a bet that this will work out and will not, for example, confuse people or give them bad habits." The researchers plan to develop an English-to-German application and test it on the Web and in Johns Hopkins classes in combination with more traditional classroom and textbook instruction. If successful, the software could also be made available on the Internet for independent learners.

The project exemplifies how interdisciplinary teams can merge cutting-edge research in machine and human learning, says Kelly Fisher, the institute's assistant director and an assistant professor in the School of Education. "It's a software program that is learning itself, learning about the learner."

Institute-funded research also targets learners far beyond those who are acquiring skills for the first time. Learning is critical for the millions of people who lose skills when they suffer strokes and other neurological conditions and then need to regain them, often through lengthy and complex rehabilitation processes. Research on how to more efficiently relearn lost skills could make a huge difference in how quickly such people can return to work and fully participate in society again.

Cognitive scientist Michael McCloskey recently discovered a new, debilitating, and apparently very rare reading deficit known as alphanumeric visual awareness deficit, or AVAD. McCloskey, a professor in the Department of Cognitive Science, identified the condition based on two cases that came to him in one year. One of them, a 61-year-old Baltimore geologist with a neurological disease, could see fine in general, but when looking at letters or numbers, he saw only blurs. McCloskey and his colleagues found, however, that by teaching the patient new characters to use in place of the digits, they could restore his recognition abilities. The researchers developed a smartphone calculator app and modified the geologist's laptop to allow him to do math with the new symbols.

Seeking to build on this work, McCloskey assembled a team of neurologists and cognitive scientists to look for more people with AVAD in order to study the condition using brain imaging and other techniques, and to develop apps and other technology that would help affected people make sense of letters and numbers again. But the researchers have run into a roadblock: They haven't found a single other case of AVAD beyond the original two. A woman in North Carolina who seemed to have the deficit turned out to have a somewhat different condition. "On the one hand, it's interesting that [AVAD is] so rare; on the other hand, it's not what we were hoping for," McCloskey says.

So he and his team have reoriented their project, broadening the scope to include more-common character recognition disorders. For example, some people cannot recognize a number or letter when it is presented to them whole but can recognize a character if they watch it being drawn. Perhaps, says McCloskey, a smartphone app could be developed to read signs and other important text, and draw each character in sequence for people with this deficit. His team is also starting to collaborate with a software developer, MicroBLINK, to make an app that would identify characters and then read the text aloud.
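The general pipeline behind such an app, recognizing the characters in an image and then speaking them, can be sketched with off-the-shelf optical character recognition and text-to-speech libraries. The example below is a generic illustration using pytesseract and pyttsx3; it is not the MicroBLINK collaboration's software, and the image file name is hypothetical.

```python
# Generic sketch of an "identify the characters, then read them aloud" helper,
# using off-the-shelf OCR (pytesseract, which needs the Tesseract engine
# installed) and text-to-speech (pyttsx3). This is not the MicroBLINK app.
from PIL import Image
import pytesseract
import pyttsx3

def read_text_aloud(image_path):
    """Recognize printed text in an image and speak it."""
    text = pytesseract.image_to_string(Image.open(image_path)).strip()
    if not text:
        print("No text recognized.")
        return
    print("Recognized:", text)
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

# read_text_aloud("street_sign.jpg")  # hypothetical example image
```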

In addition to potentially helping people regain lost abilities, many institute-funded projects such as McCloskey's are aimed at teasing apart the different brain regions and processes responsible for seemingly coherent learned skills like reading. Along these lines but focusing on an entirely different brain function, psychologist Marina Bedny, of the Krieger School's Department of Psychological and Brain Sciences, is heading a team that received an institute grant to study how the brain can retool its hardware when the original purpose of one of its regions is no longer needed. In sighted people, around a quarter of the brain is devoted to visual processing; in blind people, these brain regions get repurposed. How does this work? Bedny wondered.

To investigate this question, she and colleagues in the Krieger School's Department of Cognitive Science and in the Department of Physical Medicine and Rehabilitation at the School of Medicine are combining language comprehension assessments with a technique called transcranial magnetic stimulation, or TMS. They hope to learn whether brain regions normally devoted to sight are needed for language processing in blind people. The researchers recently collected data at a National Federation of the Blind convention and are in the process of testing a control group of sighted people. This effort would have been impossible without a source of support for interdisciplinary projects, Bedny says. "You just can't do this kind of research without an interdisciplinary team because you need so many different kinds of expertise," from linguistics to neuroimaging to TMS. "We really needed the whole team to make it happen."

In another example of institute-funded brain research, neuroscientist David Foster, of the School of Medicine, is taking on perhaps the most basic of all aspects of learning: memory formation. Specifically, Foster is interested in how certain kinds of memories are formed in a brain region called the hippocampus. He has studied this process in detail in rat brains, using dozens of implanted electrodes to precisely record electrical signals as the rats' neurons fire in sequences that represent stored memories. Foster would like to carry out similar studies in humans, but he cannot just go sticking electrodes deep into people's brains. So he first needs to develop less-invasive procedures.

Foster and William Anderson, an associate professor of neurosurgery in the School of Medicine, are now developing such techniques, piggybacking on research that Anderson's group does on epilepsy patients wherein they collect and analyze electrical data gathered from the surface of the brain. By piloting their study on a small sample of patients, the researchers hope to strengthen their position for applying for a larger grant, possibly from the NIH.

Bedny and Foster, both assistant professors, say that institute funding has allowed them to take on projects that might have otherwise been too risky and uncertain for an untenured faculty member. "I probably would not do too much looking outside of my own area to collaborate if I wasn't pushed and incentivized to do so by this kind of mechanism," Foster says. "This allows me, and pays me, to invest in thinking outside of my own small area."

The research grant program is the Science of Learning Institute's first major initiative, and many of the projects from the initial funding round are close to reporting results. The institute plans to continue awarding grants for at least three more years, and possibly more, depending on funding. To assess the program's success, Landau and Fisher are tracking metrics such as publications that awardees produce and external awards that leverage institute-funded work.

The institute also just launched its second big initiative: the Distinguished Science of Learning Fellowship Program. This program will award around five postdoctoral and predoctoral fellowships annually to students wanting to pursue interdisciplinary research in learning. Each fellow will have two advisers from different disciplines.

The fellows also will play a key role in the third prong of the institute's mission: translating and disseminating results beyond academia. Traditionally, much of the learning that occurs in the nation's formal classrooms and more informal settings is not as informed by research as it could be, says Fisher. To help change that, the Science of Learning Institute recently launched partnerships with the Port Discovery Children's Museum in Baltimore and the Children's Museum of Manhattan in New York to develop exhibits that are based on the research into the science of learning. The institute also plans to hire a dissemination expert to help translate research results into classrooms and other learning settings.

The Science of Learning Institute's stated mission is "to understand and optimize the most essential part of our human capital: the ability to learn." The mission makes the institute a crucial catalyst at a university—a place dedicated to learning—where all the pieces are already in place to make major progress on one of the most important scientific questions of our time, says Landau. "One of the goals of the Science of Learning Institute," she says, "is really to sew together the parts of the university that haven't yet interacted—to make it, in President Daniels' words, one university."

Source: JHU

Johns Hopkins student's neuroscience major inspires her first novel

First-time author Marlene Kanmogne. Image: Marshall Clarke

One night, Marlene Kanmogne had a dream. She dreamed that she could change reality and shift what was happening around her just by thinking about it.

When Kanmogne awoke, she was overcome by a feeling of power and control, emotion and energy that was so vivid and captivating that she didn't want it to end.

So the junior neuroscience major, who grew up in Cameroon and Oklahoma, decided to write it down. Over the next four years, she transferred the ongoing dream of the girl who could make things happen with her mind into a series of spiral notebooks.

"When I write, it's like there's a film in my head that I transcribe to paper as quickly as I can so that I don't lose the film," Kanmogne says.

It wasn't easy. When school was in session and there was class or volunteering or work with the African Students Association, she was far too busy to write as much as she wanted. But she persevered, completed her story, and this summer even got it published. "I'm a planner—I like to set goals and complete them," she says. "Not finishing was not an option."

The result is The Mind Wanderer (Solstice Press, 2014), a 305-page young adult novel about a 15-year-old girl named Melissa who realizes she has a powerful mind-transforming ability that's much bigger than she is and must adapt to it and accept its consequences.

Kanmogne, 20, hopes to become a physician. She always viewed the book as a personal project. Even after its publication, she has some difficulty accepting notice for being a novelist. Instead, she'd rather discuss the intricacies of the brain. "I do like writing—it's a great hobby of mine. But I think what really drives me is the mind part, the brain part. It all starts and ends with the brain," she says.

Yet, with a year and a half of college, medical school, and residency still ahead, she's not willing to leave her writing hobby behind. The Mind Wanderer, she says, is only book one of three.

"I still have dreams that further the story, and I know what Melissa will do," Kanmogne says. "I see the path of her life, and I want to follow it. And that just keeps picking at me to the point where I have to put it down on paper."

Source: JHU

On the ups and downs of the seemingly idle brain

Written By Unknown on Friday, January 30, 2015 | 5:46 AM

Cortical colors: Inhibitory cells abound in the barrel cortex of the mouse, where three main types were labeled to fluoresce in different colors: PV (red), SOM (blue), and 5HT3aR, which includes VIP and NPY (green). Image: Connors lab/Brown University
Even when it seems not to be doing much, the brain maintains a baseline of activity in the form of up and down states of bustle and quiet. To accomplish this seemingly simple cycle, it maintains a complex balance between the activity of many excitatory and inhibitory cells, Brown University scientists report in the Journal of Neuroscience.

PROVIDENCE, R.I. [Brown University] — Even in its quietest moments, the brain is never “off.” Instead, while under anesthesia, during slow-wave sleep, or even amid calm wakefulness, the brain’s cortex maintains a cycle of activity and quiet called “up” and “down” states. A new study by Brown University neuroscientists probed deep into this somewhat mysterious cycle in mice, to learn more about how the mammalian brain accomplishes it.

In addition to an apparent role in maintaining a baseline of brain activity, the up and down cycling serves as a model for other ways in which activity across the cortex is modulated, said Garrett Neske, graduate student and lead author. To study how the brain maintains this cycling, he found, is to learn how the brain walks a healthy line between excitement and inhibition as it strives to be idle but ready, a bit like a car at a stoplight.
Garrett Neske: To study how the brain maintains up and down cycles is to learn how the brain strives to be idle but ready, a bit like a car at a stoplight. Photo: David Orenstein/Brown University
“It is very important to regulate that balance of excitation and inhibition,” said senior author Barry Connors, professor and chair of neuroscience at Brown. “Too much excitation relative to inhibition you get a seizure, too little you become comatose. So whether you are awake and active and processing information or whether you are in some kind of idling state of the brain, you need to maintain that balance.”

The cycling may seem simple, but what Neske and Connors found in their investigation, published in the Journal of Neuroscience, is that it involves a good deal of complexity. They focused on five different types of cells in a particular area of the mouse cortex and found that all five appear to contribute uniquely to the ups and downs.

Cells in a barrel

Specifically, the researchers, including Saundra Patrick, neuroscience research associate and second author, looked at the activity of excitatory pyramidal cells and four kinds of inhibitory interneurons (PV, SOM, VIP, and NPY) in different layers of the barrel cortex. That part of the cortex is responsible for processing sensations on the face, including the whiskers.

Neske induced up and down cycles in slices of tissue from the barrel cortex and recorded each cell type's electrical properties and behaviors, such as firing rate and the amounts of excitation and inhibition received from other neurons.

The picture that emerged is that all types of interneurons were active. This included the most abundant interneuron subtype (the fast-spiking PV cell), and the various more slowly spiking subtypes (SOM, VIP, NPY). In fact, Connors said, the latter cells were active at levels similar to or higher than neighboring excitatory cells, contributing strong inhibition during the up state.

One way such findings are important is in how they complement recent ones from another research group, at Yale University. In that study, scientists looked at a different part of the cortex called the entorhinal cortex. There they found that only one type of inhibitory neuron, PV, seemed to be doing anything in the up state to balance out the excitation of the pyramidal neurons. The other inhibitory neurons stayed virtually silent. In his study, Neske replicated those results.

Taken together, the studies indicate that even though up and down cycles occur throughout the cortex, they may be regulated differently in different parts.

“It suggests that inhibition plays different roles in persistent activity in these two regions of cortex and it calls for more comparative work to be done among cortical areas,” Neske said. “You can’t just use one cortical region as the model for all inhibitory interneuron function.”

From observation to manipulation

Since observing the different behaviors of the neuron types, Neske has moved on to manipulating them to see what role each of them plays. Using the technique of optogenetics, in which the firing of different neuron types can be activated or suppressed with pulses of colored light, Neske is experimenting with squelching different interneurons to see how their enforced abstention affects the up and down cycle.

When the work is done, he should emerge with an even clearer idea of the brain’s intricate and diligent efforts to remain balanced between excitation and inhibition.

The National Institutes of Health (grants NS-050434, MH-086400, and T32NS062443) and the Defense Advanced Research Projects Agency (grant DARPA-BAA-09-27) supported the research.

Source: Brown University

Structure of Neuron-Connecting Synaptic Adhesion Molecules Discovered

Written By Unknown on Wednesday, January 28, 2015 | 9:22 PM

Figure 1: Overview of the PTPδ Ig1–3/Slitrk1 LRR1 complex. Copyright: Korea Advanced Institute of Science and Technology
A research team has determined the three-dimensional structure of synaptic adhesion molecules that orchestrate synaptogenesis. The findings also suggest a mechanism for the initial stage of synapse formation.

Some brain diseases, such as obsessive-compulsive disorder (OCD) and bipolar disorder, arise from a malfunction of synapses. The team expects the findings to be applied in investigating pathogenesis and developing medicines for such diseases.

The research was conducted by a Master’s candidate Kee Hun Kim, Professor Ji Won Um from Yonsei University, and Professor Beom Seok Park from Eulji University under the guidance of Professor Homin Kim from the Graduate School of Medical Science and Engineering, Korea Advanced Institute of Science and Technology (KAIST), and Professor Jaewon Ko from Yonsei University. Sponsored by the Ministry of Science, ICT and Future Planning and the National Research Foundation of Korea, the research findings were published online in the November 14th issue of Nature Communications.
Figure 2: Representative negative-stained electron microscopy images of the Slitrk1 full ectodomain (yellow arrows indicate the horseshoe-shaped LRR domains). The typical horseshoe-shaped structures and the randomness of the relative positions of each LRR domain can be observed from the two-dimensional class averages displayed in the orange box. Copyright: Korea Advanced Institute of Science and Technology
Slitrk, a neuronal transmembrane protein, interacts with the presynaptic leukocyte common antigen-related receptor protein tyrosine phosphatases (LAR-RPTPs) to form a protein complex. The complex is involved in the initial stage of synapse development and balances the excitatory and inhibitory signals of neurons.

Disorders of these two proteins are known to cause synaptic malfunction, resulting in neuropsychiatric conditions such as autism, epilepsy, OCD, and bipolar disorder. However, because neither the structure nor the synaptogenic function of these proteins was understood, the development of treatments could not progress.

The research team determined the three-dimensional structure of the two synaptic adhesion molecules, Slitrk and LAR-RPTPs, and identified their regions of interaction through protein crystallography and transmission electron microscopy (TEM). Furthermore, they found that synapse formation is induced after the complex of the two synaptic adhesion molecules assembles into a cluster.
Figure 3: Model of the two-step presynaptic differentiation process mediated by the binding of Slitrks to LAR-RPTPs and subsequent lateral assembly of trans-synaptic LAR-RPTPs/Slitrk complexes. Copyright: Korea Advanced Institute of Science and Technology
Professor Kim said, “The research findings will serve as a basis for understanding the pathogenesis of brain diseases that arise from a malfunction of synaptic adhesion molecules. In particular, this is a good example in which collaboration between structural biology and neurobiology has led to a fruitful result.” Professor Ko commented that the work “will give new direction to research on synapse formation by revealing the molecular mechanism of synaptic adhesion molecules.”

Source: KAIST

Why Do We Feel Thirst? An Interview with Yuki Oka

Written By Unknown on Tuesday, January 27, 2015 | 6:52 PM

Credit: Lance Hayashida/Caltech Marketing and Communications
To fight dehydration on a hot summer day, you instinctively crave the relief provided by a tall glass of water. But how does your brain sense the need for water, generate the sensation of thirst, and then ultimately turn that signal into a behavioral trigger that leads you to drink water? That's what Yuki Oka, a new assistant professor of biology at Caltech, wants to find out.

Oka's research focuses on how the brain and body work together to maintain a healthy ratio of salt to water as part of a delicate form of biological balance called homeostasis.

Recently, Oka came to Caltech from Columbia University. We spoke with him about his work, his interests outside of the lab, and why he's excited to be joining the faculty at Caltech.

Can you tell us a bit more about your research?

The goal of my research is to understand the mechanisms by which the brain and body cooperate to maintain our internal environment's stability, which is called homeostasis. I'm especially focusing on fluid homeostasis, the fundamental mechanism that regulates the balance of water and salt. When water or salt is depleted in the body, the brain generates a signal that causes either thirst or a salt craving. And that craving then drives animals to either drink water or eat something salty.

I'd like to know how our brain generates such a specific motivation simply by sensing internal state, and then how that motivation—which is really just neural activity in the brain—goes on to control the behavior.

Why did you choose to study thirst?

After finishing my Ph.D. in Japan, I came to Columbia University, where I worked on salt-sensing mechanisms in the mammalian taste system. We found that the peripheral taste system has a key function in salt homeostasis by regulating our salt intake behavior. But of course, the peripheral sensor does not work by itself. It requires a controller, the brain, which uses information from the sensor. So I decided to move on to explore the function of the brain, the real driver of our behaviors.

I was fascinated by thirst because the behavior it generates is very robust and stereotyped across various species. If an animal feels thirst, the behavioral output is simply to drink water. On the other hand, if the brain triggers salt appetite, then the animal specifically looks for salt—nothing else. These direct causal relations make it an ideal system to study the link between the neural circuit and the behavior.

You recently published a paper on this work in the journal Nature. Could you tell us about those findings?

In the paper, we linked specific neural populations in the brain to water drinking behavior. Previous work from other labs suggested that thirst may stem from a part of the brain called the hypothalamus, so we wanted to identify which groups of neurons in the hypothalamus control thirst. Using a technique called optogenetics that can manipulate neural activities with light, we found two distinct populations of neurons that control thirst in two opposite directions. When we activated one of those two populations, it evoked an intense drinking behavior even in fully water-satiated animals. In contrast, activation of a second population drastically suppressed drinking, even in highly water-deprived thirsty animals.  In other words, we could artificially create or erase the desire for drinking water.

Our findings suggest that there is an innate brain circuit that can turn an animal's water-drinking behavior on and off, and that this circuit likely functions as a center for thirst control in the mammalian brain. This work was performed with support from Howard Hughes Medical Institute and National Institutes of Health [for Charles S. Zuker at Columbia University, Oka's former advisor].

You use a mouse model to study thirst, but does this work have applications for humans?

There are many fluid homeostasis-associated conditions; one example is dehydration. We cannot specifically say a direct application for humans since our studies are focused on basic research. But if the same mechanisms and circuits exist in mice and humans, our studies will provide important insights into human physiologies and conditions.

Where did you grow up—and what started your initial interest in science?

I grew up in Japan, close to Tokyo, but not really in the center of the city. It was a nice combination between the big city and nature. There was a big park close to my house and when I was a child, I went there every day and observed plants and animals. That's pretty much how I spent my childhood. My parents are not scientists—neither of them, actually. It was just my innate interest in nature that made me want to be a scientist.

What drew you to Caltech?

I'm really excited about the environment here and the great climate. That's actually not trivial; I think the climate really does affect the people. For example, if you compare Southern California to New York, it's just a totally different character. I came here for a visit last January, and although it was my first time at Caltech I kind of felt a bond. I hadn't even received an offer yet, but I just intuitively thought, "This is probably the place for me."

I'm also looking forward to talking to my colleagues here who use fMRI for human behavioral research. One great advantage about using human subjects in behavioral studies is that they can report back to you about how they feel. There are certainly advantages of using an animal model, like mice. But they cannot report back. We just observe their behavior and say, "They are drinking water, so they must be thirsty." But that is totally different than someone telling you, "I feel thirsty." I believe that combining advantages of animal and human studies should allow us to address important questions about brain functions.

Do you have any hobbies?

I play basketball in my spare time, but my major hobby is collecting fossils. I have some trilobites and, actually, I have a complete set of bones from a type of herbivorous dinosaur. It is being shipped from New York right now and I may put it in my new office.

Written by Jessica Stoller-Conrad


Source: California Institute of Technology

Handheld scanner could make brain tumor removal more complete, reducing recurrence

Written By Unknown on Sunday, January 18, 2015 | 8:18 AM

A handheld device that resembles a laser pointer could someday help surgeons remove all of the cells in a brain tumor. Credit: Moritz Kircher
Cancerous brain tumors are notorious for growing back despite surgical attempts to remove them -- and for leading to a dire prognosis for patients. But scientists are developing a new way to try to root out malignant cells during surgery so fewer or none get left behind to form new tumors. The method, reported in the journal ACS Nano, could someday vastly improve the outlook for patients.

Moritz F. Kircher and colleagues at Memorial Sloan Kettering Cancer Center point out that malignant brain tumors, particularly the kind known as glioblastoma multiforme (GBM), are among the toughest to beat. Although relatively rare, GBM is highly aggressive, and its cells multiply rapidly. Surgical removal is one of the main weapons doctors have to treat brain tumors. The problem is that currently, there's no way to know if they have taken out all of the cancerous cells. And removing extra material "just in case" isn't a good option in the brain, which controls so many critical processes. The techniques surgeons have at their disposal today are not accurate enough to identify all the cells that need to be excised. So Kircher's team decided to develop a new method to fill that gap.

The researchers used a handheld device resembling a laser pointer that can detect "Raman nanoprobes" with very high accuracy. These nanoprobes are injected the day prior to the operation and go specifically to tumor cells, and not to normal brain cells. Using a handheld Raman scanner in a mouse model that mimics human GBM, the researchers successfully identified and removed all malignant cells in the rodents' brains. Also, because the technique involves steps that have already made it to human testing for other purposes, the researchers conclude that it has the potential to move readily into clinical trials. Surgeons might be able to use the device in the future to treat other types of brain cancer, they say.

The authors acknowledge funding from the National Institutes of Health.

Heart drug may help treat ALS, mouse study shows

Written By Unknown on Friday, January 16, 2015 | 4:32 AM

In the top image, cells from a mouse model of amyotrophic lateral sclerosis caused normal healthy brain cells (green) to die. But when scientists blocked an enzyme in the cells from the mouse model, more of the normal cells and their branches survived (bottom). Credit: Nature Neuroscience
Digoxin, a medication used in the treatment of heart failure, may be adaptable for the treatment of amyotrophic lateral sclerosis (ALS), a progressive, paralyzing disease, suggests new research at Washington University School of Medicine in St. Louis.

ALS, also known as Lou Gehrig's disease, destroys the nerve cells that control muscles. This leads to loss of mobility, difficulty breathing and swallowing and eventually death. Riluzole, the sole medication approved to treat the disease, has only marginal benefits in patients.

But in a new study conducted in cell cultures and in mice, scientists showed that when they reduced the activity of an enzyme or limited cells' ability to make copies of the enzyme, the disease's destruction of nerve cells stopped. The enzyme maintains the proper balance of sodium and potassium in cells.

"We blocked the enzyme with digoxin," said senior author Azad Bonni, MD, PhD. "This had a very strong effect, preventing the death of nerve cells that are normally killed in a cell culture model of ALS."

The findings appear online Oct. 26 in Nature Neuroscience.

The results stemmed from Bonni's studies of brain cells' stress responses in a mouse model of ALS. The mice have a mutated version of a gene that causes an inherited form of the disease and develop many of the same symptoms seen in humans with ALS, including paralysis and death.

Efforts to monitor the activity of a stress response protein in the mice unexpectedly led the scientists to another protein: sodium-potassium ATPase. This enzyme ejects charged sodium particles from cells and takes in charged potassium particles, allowing cells to maintain an electrical charge across their outer membranes.

Maintenance of this charge is essential for the normal function of cells. The particular sodium-potassium ATPase highlighted by Bonni's studies is found in nervous system cells called astrocytes. In the ALS mice, levels of the enzyme are higher than normal in astrocytes.

Bonni's group found that the increase in sodium-potassium ATPase led the astrocytes to release harmful factors called inflammatory cytokines, which may kill motor neurons.

Recent studies have suggested that astrocytes may be crucial contributors to neurodegenerative disorders such as ALS and Alzheimer's, Huntington's, and Parkinson's diseases. For example, placing astrocytes from ALS mice in culture dishes with healthy motor neurons causes the neurons to degenerate and die.

"Even though the neurons are normal, there's something going on in the astrocytes that is harming the neurons," said Bonni, the Edison Professor of Neurobiology and head of the Department of Anatomy and Neurobiology.

How this happens isn't clear, but Bonni's results suggest the sodium-potassium ATPase plays a key role. When he conducted the same experiment but blocked the enzyme in ALS astrocytes using digoxin, the normal motor nerve cells survived. Digoxin blocks the ability of sodium-potassium ATPase to eject sodium and bring in potassium.

In mice with the mutation for inherited ALS, those with only one copy of the gene for sodium-potassium ATPase survived an average of 20 days longer than those with two copies of the gene. When one copy of the gene is gone, cells make less of the enzyme.

"The mice with only one copy of the sodium-potassium ATPase gene live longer and are more mobile," Bonni said. "They're not normal, but they can walk around and have more motor neurons in their spinal cords."

Many important questions remain about whether and how inhibitors of the sodium-potassium ATPase enzyme might be used to slow progressive paralysis in ALS, but Bonni said the findings offer an exciting starting point for further studies.

Existing drug, riluzole, may prevent foggy 'old age' brain, research shows

Better memory makers: When researchers looked at certain neurons (similar to the one shown on top) in rats treated with riluzole, they found an important change in one brain region, the hippocampus: more clusters of so-called spines, receiving connections that extend from the branches of a neuron (bottom). Credit: Image courtesy of Rockefeller University
Forgetfulness, it turns out, is all in the head. Scientists have shown that fading memory and clouding judgment, the type that comes with advancing age, show up as lost and altered connections between neurons in the brain. But new experiments suggest an existing drug, known as riluzole and already on the market as a treatment for ALS, may help prevent these changes.

Researchers at The Rockefeller University and The Icahn School of Medicine at Mount Sinai found they could stop normal, age-related memory loss in rats by treating them with riluzole. This treatment, they found, prompted changes known to improve connections, and as a result, communication, between certain neurons within the brain's hippocampus.

"By examining the neurological changes that occurred after riluzole treatment, we discovered one way in which the brain's ability to reorganize itself -- its neuroplasticity -- can be marshaled to protect it against some of the deterioration that can accompany old age, at least in rodents," says co-senior study author Alfred E. Mirsky Professor Bruce McEwen, head of the Harold and Margaret Milliken Hatch Laboratory of Neuroendocrinology. The research is published this week in Proceedings of the National Academy of Sciences.

Neurons connect to one another to form circuits connecting certain parts of the brain, and they communicate using a chemical signal known as glutamate. But too much glutamate can cause damage; excess can spill out and excite connecting neurons in the wrong spot. In the case of age-related cognitive decline, this process damages neurons at the points where they connect -- their synapses. In neurodegenerative disorders, such as Alzheimer's disease, this contributes to the death of neurons.

Used to slow the progress of another neurodegenerative condition, ALS (also known as Lou Gehrig's disease), riluzole was an obvious choice as a potential treatment, because it works by helping to control glutamate release and uptake, preventing harmful spillover. The researchers began giving riluzole to rats once they reached 10 months old, the rat equivalent of middle age, when their cognitive decline typically begins.

After 17 weeks of treatment, the researchers tested the rats' spatial memory -- the type of memory most readily studied in animals -- and found they performed better than their untreated peers, and almost as well as young rats. For instance, when placed in a maze they had already explored, the treated rats recognized an unfamiliar arm as such and spent more time investigating it.

When the researchers looked inside the brains of riluzole-treated rats, they found telling changes to the vulnerable glutamate sensing circuitry within the hippocampus, a brain region implicated in memory and emotion.

"We have found that in many cases, aging involves synaptic changes that decrease synaptic strength, the plasticity of synapses, or both," said John Morrison, professor of neuroscience and the Friedman Brain Institute and dean of basic sciences and the Graduate School of Biomedical Sciences at Mount Sinai. "The fact that riluzole increased the clustering of only the thin, most plastic spines, suggests that its enhancement of memory results from both an increase in synaptic strength and synaptic plasticity, which might explain its therapeutic effectiveness."

In this case, the clusters involved thin spines, a rapidly adaptable type of spine. The riluzole-treated animals had more clustering than the young animals and their untreated peers, who had the least. This discovery led the researchers to speculate that, in general, the aged brain may compensate by increasing clustering. Riluzole appears to enhance this mechanism.

"In our study, this phenomenon of clustering proved to be the core underlying mechanism that prevented age-related cognitive decline. By compensating the deleterious changes in glutamate levels with aging and Alzheimer's disease and promoting important neuroplastic changes in the brain, such as clustering of spines, riluzole may prevent cognitive decline," says first author Ana Pereira, an instructor in clinical investigation in McEwen's laboratory.

Taking advantage of the overlap of neural circuits vulnerable to age-related cognitive decline and Alzheimer's disease, Pereira is currently conducting a clinical trial to test the effectiveness of riluzole for patients with mild Alzheimer's.

How does the brain react to virtual reality? Completely different pattern of activity in brain

Written By Unknown on Thursday, January 8, 2015 | 3:27 AM

Illusions (stock image). UCLA neurophysicists have found that space-mapping neurons in the brain react differently to virtual reality than they do to real-world environments. Credit: © agsandrew / Fotolia
UCLA neurophysicists have found that space-mapping neurons in the brain react differently to virtual reality than they do to real-world environments. Their findings could be significant for people who use virtual reality for gaming, military, commercial, scientific or other purposes.

"The pattern of activity in a brain region involved in spatial learning in the virtual world is completely different than when it processes activity in the real world," said Mayank Mehta, a UCLA professor of physics, neurology and neurobiology in the UCLA College and the study's senior author. "Since so many people are using virtual reality, it is important to understand why there are such big differences."

The study was published today in the journal Nature Neuroscience.

The scientists were studying the hippocampus, a region of the brain involved in diseases such as Alzheimer's, stroke, depression, schizophrenia, epilepsy and post-traumatic stress disorder. The hippocampus also plays an important role in forming new memories and creating mental maps of space. For example, when a person explores a room, hippocampal neurons become selectively active, providing a "cognitive map" of the environment.

The mechanisms by which the brain makes those cognitive maps remain a mystery, but neuroscientists have surmised that the hippocampus computes distances between the subject and surrounding landmarks, such as buildings and mountains. But in a real maze, other cues, such as smells and sounds, can also help the brain determine spaces and distances.

To test whether the hippocampus could actually form spatial maps using only visual landmarks, Mehta's team devised a noninvasive virtual reality environment and studied how the hippocampal neurons in the brains of rats reacted in the virtual world without the ability to use smells and sounds as cues.

Researchers placed a small harness around rats and put them on a treadmill surrounded by a "virtual world" on large video screens -- a virtual environment they describe as even more immersive than IMAX -- in an otherwise dark, quiet room. The scientists measured the rats' behavior and the activity of hundreds of neurons in their hippocampi, said UCLA graduate student Lavanya Acharya, a lead author on the research.

The researchers also measured the rats' behavior and neural activity when they walked in a real room designed to look exactly like the virtual reality room.

The scientists were surprised to find that the results from the virtual and real environments were entirely different. In the virtual world, the rats' hippocampal neurons seemed to fire completely randomly, as if the neurons had no idea where the rat was -- even though the rats seemed to behave perfectly normally in the real and virtual worlds.

"The 'map' disappeared completely," said Mehta, director of a W.M. Keck Foundation Neurophysics center and a member of UCLA's Brain Research Institute. "Nobody expected this. The neuron activity was a random function of the rat's position in the virtual world."

"In fact, careful mathematical analysis showed that neurons in the virtual world were calculating the amount of distance the rat had walked, regardless of where he was in the virtual space," explained Zahra Aghajan, a UCLA graduate student and another of the study's lead authors.

They also were shocked to find that although the rats' hippocampal neurons were highly active in the real-world environment, more than half of those neurons shut down in the virtual space.

The virtual world used in the study was very similar to virtual reality environments used by humans, and neurons in a rat's brain would be very hard to distinguish from neurons in the human brain, Mehta said.

His conclusion: "The neural pattern in virtual reality is substantially different from the activity pattern in the real world. We need to fully understand how virtual reality affects the brain."

Neurons Bach would appreciate

In addition to analyzing the activity of individual neurons, Mehta's team studied larger groups of the brain cells. Previous research, including studies by his group, has revealed that groups of neurons create a complex pattern using brain rhythms.

"These complex rhythms are crucial for learning and memory, but we can't hear or feel these rhythms in our brain. They are hidden under the hood from us," Mehta said. "The complex pattern they make defies human imagination. The neurons in this memory-making region talk to each other using two entirely different languages at the same time. One of those languages is based on rhythm; the other is based on intensity."

Every neuron in the hippocampus speaks the two languages simultaneously, Mehta said, comparing the phenomenon to the multiple concurrent melodies of a Bach fugue.

Mehta's group reports that in the virtual world, the language based on rhythm has a similar structure to that in the real world, even though it says something entirely different in the two worlds. The language based on intensity, however, is entirely disrupted.

When people walk or try to remember something, the activity in the hippocampus becomes very rhythmic and these complex, rhythmic patterns appear, Mehta said. Those rhythms facilitate the formation of memories and our ability to recall them. Mehta hypothesizes that in some people with learning and memory disorders, these rhythms are impaired.

"Neurons involved in memory interact with other parts of the hippocampus like an orchestra," Mehta said. "It's not enough for every violinist and every trumpet player to play their music flawlessly. They also have to be perfectly synchronized."

Mehta believes that by retuning and synchronizing these rhythms, doctors will be able to repair damaged memory, but said doing so remains a huge challenge.

"The need to repair memories is enormous," noted Mehta, who said neurons and synapses -- the connections between neurons -- are amazingly complex machines.

Previous research by Mehta showed that the hippocampal circuit rapidly evolves with learning and that brain rhythms are crucial for this process. Mehta conducts his research with rats because analyzing complex brain circuits and neural activity with high precision currently is not possible in humans.

Other co-authors of the study were Jason Moore, a UCLA graduate student; Cliff Vuong, a research assistant who conducted the research as a UCLA undergraduate; and UCLA postdoctoral scholar Jesse Cushman. The research was funded by the W.M. Keck Foundation and the National Institutes of Health.

Source: University of California - Los Angeles

Software models more detailed evolutionary networks from genetic data

Written By Unknown on Wednesday, January 7, 2015 | 11:07 PM

Phylogenetic networks depict the movement of genetic sequences from one species to another as a means of showing where horizontal gene transfer may have taken place. Software by scientists at Rice University aims to reveal far more about species’ evolutionary histories than traditional tree models are able to. Credit: Luay Nakhleh/Rice University
The tree has been an effective model of evolution for 150 years, but a Rice University computer scientist believes it's far too simple to illustrate the breadth of current knowledge.

Rice researcher Luay Nakhleh and his group have developed PhyloNet, an open-source software package that accounts for horizontal as well as vertical inheritance of genetic material among genomes. His "maximum likelihood" method, detailed this month in the Proceedings of the National Academy of Sciences, allows PhyloNet to infer network models that better describe the evolution of certain groups of species than do tree models.

"Inferring" in this case means analyzing genes to determine their evolutionary history with the highest probability -- the maximum likelihood -- of connections between species. Nakhleh and Rice colleague Christopher Jermaine recently won a $1.1 million National Science Foundation grant to analyze evolutionary patterns using Bayesian inference, a statistics-based technique to estimate probabilities based on a data set.

To build networks that account for all of the genetic connections between species, the software infers the probability of variations that phylogenetic trees can't illustrate, such as horizontal gene transfers. These transfers circumvent simple parent-to-offspring evolution and allow genetic variations to move from one species to another by means other than reproduction.

Biologists want to know when and how these transfers happened, but tree structures conceal such information. "When horizontal transfer occurs, as with the hybridization of two species, the tree model becomes inadequate to describe the evolutionary history, and networks that incorporate horizontal gene transfer become the more appropriate model," Nakhleh said.

Nakhleh's Java-based software accounts for incomplete lineage sorting, in which clues to gene evolution that don't match the established lineage of species appear in the genetic record.

"We are the first group to develop a general model that will allow biologists to estimate hybridization while accounting for all these complexities in evolution," Nakhleh said.
Most existing programs for phylogenetics (the study of evolutionary relationships) ignore such complexities. "They end up overestimating the amount of hybridization," Nakhleh said. 
"They start seeing lots of complexities in the data and say, 'Oh, it's complex here; it must be hybridization,' and end up inferring too much. Our method acknowledges that part of the complexity has nothing to do with hybridization; it has to do with other random processes that happened during evolution."

The Rice researchers used two data sets to test the new program. One, a computer-generated set of data that mimics a realistic model of evolution, allowed them to evaluate the accuracy of the program. The second involved multiple genomes of mice found across Europe and Asia. "There have been stories about mice hybridizing," Nakhleh said. "Now that we have the first method to allow for systematic analysis, we ran it on a very large amount of data from five mouse samples and we detected hybridization" -- most notably in the presence of a genetic signal from a mouse in Kazakhstan that found its way to mice in France and Germany, he said.

Nakhleh hopes evolutionary biologists will use PhyloNet to take a fresh look at the massive amount of genomic data collected over the past few decades. "The exciting thing for me about this is that biologists can now systematically go through lots of data they have generated and check to see if there has been hybridization."

That smartphone is giving your thumbs superpowers

Written By Unknown on Tuesday, January 6, 2015 | 10:41 PM

While neuroscientists have long studied brain plasticity in expert groups--musicians or video gamers, for instance--smartphones present an opportunity to understand how regular life shapes the brains of regular people. Credit: © Antonioguillem / Fotolia
When people spend time interacting with their smartphones via touchscreen, it actually changes the way their thumbs and brains work together, according to a report in the Cell Press journal Current Biology on December 23. More touchscreen use in the recent past translates directly into greater brain activity when the thumbs and other fingertips are touched, the study shows.

"I was really surprised by the scale of the changes introduced by the use of smartphones," says Arko Ghosh of the University of Zurich and ETH Zurich in Switzerland. "I was also struck by how much of the inter-individual variations in the fingertip-associated brain signals could be simply explained by evaluating the smartphone logs."

It all started when Ghosh and his colleagues realized that our newfound obsession with smartphones could be a grand opportunity to explore the everyday plasticity of the human brain. People are suddenly using their fingertips, and especially their thumbs, in a new way, and many of us are doing it an awful lot, day after day. Better yet, our phones keep track of our digital histories, providing a ready-made source of data on those behaviors.

Ghosh explains it this way: "I think first we must appreciate how common personal digital devices are and how densely people use them. What this means for us neuroscientists is that the digital history we carry in our pockets has an enormous amount of information on how we use our fingertips (and more)."

To link digital footprints to brain activity in the new study, Ghosh and his team used electroencephalography (EEG) to record the brain response to mechanical touch on the thumb, index, and middle fingertips of touchscreen phone users in comparison to people who still haven't given up their old-school mobile phones.

The researchers found that the electrical activity in the brains of smartphone users was enhanced when all three fingertips were touched. In fact, the amount of activity in the cortex of the brain associated with the thumb and index fingertips was directly proportional to the intensity of phone use, as quantified by the phone's built-in battery logs. The thumb tip was even sensitive to day-to-day fluctuations: the shorter the time elapsed since an episode of intense phone use, the researchers report, the larger the cortical potential associated with it.
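A claim like "directly proportional to the intensity of phone use" boils down to regressing the EEG response amplitude on usage extracted from the phone's logs. The snippet below is only an illustration with invented numbers; it is not the study's analysis pipeline or data.

    import numpy as np

    # Hypothetical per-participant values: recent touchscreen use taken
    # from the phone's own logs, and the amplitude of the cortical
    # potential evoked by touching the thumb tip (microvolts).
    usage_hours = np.array([0.5, 1.2, 2.0, 3.1, 4.5, 5.8])
    thumb_potential_uv = np.array([2.1, 2.6, 3.4, 4.0, 5.2, 6.1])

    # A positive slope and strong correlation are what "activity
    # directly proportional to phone use" would look like in such data.
    slope, intercept = np.polyfit(usage_hours, thumb_potential_uv, 1)
    r = np.corrcoef(usage_hours, thumb_potential_uv)[0, 1]
    print(f"slope = {slope:.2f} microvolts/hour, r = {r:.2f}")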

The results suggest to the researchers that repetitive movements over the smooth touchscreen surface reshape sensory processing from the hand, with daily updates in the brain's representation of the fingertips. And that leads to a pretty remarkable idea: "We propose that cortical sensory processing in the contemporary brain is continuously shaped by personal digital technology," Ghosh and his colleagues write.

What exactly this influence of digital technology means for us in other areas of our lives is a question for another day. The news might not be so good, Ghosh and colleagues say, noting evidence linking excessive phone use with motor dysfunctions and pain.

Source: Cell Press

In search of the origin of our brain

Written By Unknown on Thursday, December 25, 2014 | 3:43 AM

Nervous system in Nematostella vectensis embryos with different nerve cell populations, where the different neurons (here in green, blue and magenta) show asymmetry. Credit: Hiroshi Watanabe, Thomas Holstein / Nature Communications 5:5536, Macmillan Publishers Limited
While searching for the origin of our brain, biologists at Heidelberg University have gained new insights into the evolution of the central nervous system (CNS) and its highly developed biological structures. The researchers analysed neurogenesis at the molecular level in the model organism Nematostella vectensis. Using certain genes and signal factors, the team led by Prof. Dr. Thomas Holstein of the Centre for Organismal Studies demonstrated how the origin of nerve cell centralization can be traced back to the diffuse nerve net of simple, ancestral animals like the sea anemone. The results of their research will be published in the journal "Nature Communications."

Like corals and jellyfish, the sea anemone -- Nematostella vectensis -- is a member of the phylum Cnidaria, a group that is over 700 million years old. It has a simple sack-like body, with no skeleton and just one body orifice. The nervous system of this ancient multicellular animal is organised in an elementary nerve net that is already capable of simple behaviour patterns. Researchers previously assumed that this net showed no centralization -- that is, no local concentration of nerve cells. In the course of their research, however, the scientists discovered that the nerve net of the embryonic sea anemone is formed by a set of neuronal genes and signal factors that are also found in vertebrates.

According to Prof. Holstein, the origin of the first nerve cells depends on the Wnt signal pathway, named for its signal protein, Wnt. It plays a pivotal role in the orderly development of different animal cell types. The Heidelberg researchers also uncovered an initial indication that another signal pathway is active in the neurogenesis of sea anemones -- the BMP pathway, which is instrumental in the centralization of nerve cells in vertebrates.

Named after the BMP signal protein, this pathway controls the development of various cell types in a concentration-dependent manner, much like the Wnt pathway, but along a different direction. The BMP pathway runs at a right angle to the Wnt pathway, thereby creating an asymmetrical pattern of neuronal cell types in the widely diffuse neuronal net of the sea anemone. "This can be considered as the birth of centralization of the neuronal network on the path to the complex brains of vertebrates," underscores Prof. Holstein.
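One way to picture how two pathways running at right angles can produce an asymmetric pattern is as two concentration gradients laid down along perpendicular axes, so that each cell reads out a unique pair of values. The toy calculation below (Python, made-up numbers) is only a cartoon of that geometric idea, not a model of the actual Nematostella signalling network.

    import numpy as np

    # Toy 20 x 20 sheet of cells. One gradient falls off along the
    # primary body axis, the other along the perpendicular axis
    # (arbitrary concentration units; no real parameters implied).
    n = 20
    x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
    wnt = 1.0 - x   # gradient along the primary axis
    bmp = 1.0 - y   # gradient along the perpendicular axis

    # Each cell reads both concentrations; thresholding the pair into
    # discrete fates gives a pattern that differs along both axes
    # instead of being uniform around the primary axis alone.
    fate = (wnt > 0.5).astype(int) * 2 + (bmp > 0.5).astype(int)
    print(np.unique(fate))   # four distinct fate classes: 0, 1, 2, 3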

While the Wnt signal path triggers the formation of the primary body axis of all animals, from sponges to vertebrates, the BMP signal pathway is also involved in the formation of the secondary body axis (back and abdomen) in advanced vertebrates. "Our research results indicate that the origin of a central nervous system is closely linked to the evolution of the body axes," explains Prof. Holstein.

 