
How an innovative grants program (and Belgian beer mixers) at Johns Hopkins fuels discoveries about the human brain

Written By Unknown on Monday, February 2, 2015 | 2:06 AM



A neuroscientist, an electrical engineer, a surgeon, and an education researcher walk up to a bar.

This could be the start of a joke, or it could be a scene from a recent Science of Learning Institute event at Johns Hopkins University. At the institute's four-times-yearly Belgian Beer Events, scientists from far-flung fields—and often from far-flung parts of the university itself—present their research to each other in short, digestible chunks. Their creativity and conviviality stimulated by a cup of ale or lager, the researchers strike up conversations and form connections that range widely across disciplinary boundaries, from classroom learning to machine learning, from recovery from stroke to memory formation in the brain.

Such conversations can be all too rare at a university where faculty are spread not just across a campus but throughout a large city and beyond. The result, for an inherently interdisciplinary subject like the science of learning, is that projects that could address fundamental and important questions can be hard to conceive and get off the ground. And too often, promising basic research doesn't get translated into the settings where it could help real-world learners.

The Belgian Beer Events, conceived shortly after the institute launched in 2013, are helping change that. They provide an informal space where basic researchers can meet translators, where machine-learning experts can meet early-childhood educators, where cognitive scientists can meet smartphone app developers. The events rotate between locations: October's was at the School of Education, and December's was hosted by the Department of Biomedical Engineering; previous ones were held at the School of Medicine and in Homewood's Levering Hall. Computer scientist Greg Hager likens the events to "an intellectual mixing bowl."

Beyond generating lively conversation, the gatherings are sparking collaborations between researchers who otherwise might never have met. At an event in 2013, neurologist Bonnie Nozari presented her work on speech and language processing disorders. Computer scientist Raman Arora then spoke about his work on machine learning and speech recognition. Recognizing a mutual interest in speech, the two chatted. The next day, they began planning a joint project to see if computers can predict how humans will pronounce words, and then provide feedback to people seeking to learn a new language, or to relearn how to speak after a stroke.

It sounds like a lucky encounter, but in fact electrical engineer Sanjeev Khudanpur, a member of the institute's steering committee, was at work behind the scenes. He conceived the Belgian Beer Events, and he made sure that Arora, his colleague in the Whiting School of Engineering, would be speaking on the same day as Nozari, of the School of Medicine. Later, when the two were ready to apply for funding, Khudanpur encouraged their ultimately successful proposal for one of the institute's research grants. "I see myself as a matchmaker," he says.

"It's that kind of really innovative, different seeding of projects that I think we've done really well," says Barbara Landau, the institute's director and the Dick and Lydia Todd Professor of Cognitive Science in the Krieger School of Arts and Sciences. The institute funded eight projects in 2013 and eight more in 2014, with projects receiving an average of $140,000 spread over two years. Funding goes to hiring graduate students and postdoctoral researchers, developing software, purchasing equipment, and supplying other research needs. The grants are competitive; the review committee has received around 30 proposals a year. The funded projects address a broad range of learning settings, from the classroom to the operating room to distance learning that can take place anywhere. The learners are not limited to humans, either; many of the projects include a strong component of "machine learning"—harnessing computers to recognize patterns in data and use them to develop new human learning applications. Other projects focus on developing animal models that can be used to study human learning.

The grant program allows researchers to get support for projects that might not be quite ready for a proposal to a traditional funding agency like the National Science Foundation or the National Institutes of Health, says Landau. Almost without exception, an NSF or NIH review panel will want to see at least preliminary data demonstrating that an idea is viable. With Science of Learning Institute funding, scientists can do exploratory research that will provide the data needed to support a larger proposal to a more traditional funding agency. "It allows people to do things that they wouldn't necessarily be able to accomplish by a standard grant," says Landau. "The granting agencies tend to be somewhat conservative, and we're looking for innovation."

Like Arora and Nozari's collaboration, many of the funded projects harness existing technological applications to improve learning, often in novel ways. For example, Khudanpur and Hager are working with Gyusung Lee, an instructor of surgery in the School of Medicine, to develop computer software that can help teach surgeons how to use the da Vinci robotic surgical platform. The project grew out of an existing effort called the Language of Surgery, developed by researchers in the Whiting School of Engineering's Laboratory for Computational Sensing and Robotics.

Through this effort, which began in 2006, Hager, Khudanpur, and colleagues program computers to record and analyze the different kinds of movements that surgeons make while performing certain tasks with surgical robots. The researchers' goal was to find movements that could consistently be classified as either expertlike or novicelike. Novice surgeons are more likely to break a suture, for example, or to push or pull on tissue while using the robot to manipulate a surgical needle. The researchers were able to train computer software to recognize such expert and novice movements much as a surgical trainer would.
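The classification step can be pictured with a toy sketch. Everything here is invented for illustration: the feature names (tool-path length in centimeters, count of abrupt "jerk" events) and the numbers are hypothetical, and the real Language of Surgery models are trained on far richer robot telemetry than two hand-picked features. A minimal nearest-centroid classifier nevertheless shows the basic idea of learning "expert-like" versus "novice-like" movement categories from labeled examples:

```python
def train_centroids(labeled_examples):
    """Average the feature vectors for each label ('expert'/'novice')."""
    sums, counts = {}, {}
    for label, vec in labeled_examples:
        if label not in sums:
            sums[label], counts[label] = [0.0] * len(vec), 0
        sums[label] = [s + v for s, v in zip(sums[label], vec)]
        counts[label] += 1
    return {lab: [s / counts[lab] for s in sums[lab]] for lab in sums}

def classify(centroids, vec):
    """Assign a new gesture to the label with the nearest centroid."""
    dist2 = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist2(centroids[lab], vec))

# Hypothetical training data: (label, [path_length_cm, jerk_events])
examples = [("expert", [7.0, 1.0]), ("expert", [8.0, 0.0]),
            ("novice", [14.0, 4.0]), ("novice", [16.0, 6.0])]
model = train_centroids(examples)
print(classify(model, [7.5, 1.0]))   # a short, smooth gesture
print(classify(model, [15.0, 5.0]))  # a long, jerky gesture
```

A trainer-in-software of the kind the article describes would run a classifier like this continuously over a trainee's recorded gestures, which is what makes automated, per-movement feedback possible.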

The next step is to have the assessment tool provide real-time feedback to surgical trainees. With the kind of application the researchers are envisioning, trainees could, in theory, receive an unlimited amount of individualized feedback on what skills they have mastered and where more work is needed. "We're putting the computer in the human learning loop," Khudanpur says. "The computer has certain abilities that are complementary to humans. [For example,] the computer doesn't get tired. The computer usually doesn't charge by the hour."

A few years ago, when the researchers applied for an NIH grant to develop such a learning application, the proposal was rejected because they had no data showing the idea had promise. Thanks to their Science of Learning Institute research award, the scientists are starting to collect that data. Backed by some preliminary results, they recently put in a new NIH proposal and are waiting to hear back.

Meanwhile, thanks to a talk Hager gave last fall, his team's research may soon spawn another effort, which would take Language of Surgery technology out of the operating room and into the classroom. Hager's presentation inspired Landau and Amy Shelton, a professor in the School of Education, who is also on the institute's steering committee, to wonder whether motion-tracking software could recognize the movements that young children make when learning to build toy towers out of blocks. Spatial skills like tower building, in addition to being important in their own right, are of interest to researchers because they often predict children's future abilities in math and other areas. Hager, Landau, and Shelton are now discussing a potential project to put motion sensors on blocks and use computers to track how children acquire manipulation skills, a tactic similar to the one Hager's team uses to assess the skills of aspiring surgeons.

Institute-funded collaborations between computer scientists and education researchers are also reaching far beyond traditional education settings like medical training. In a project funded in 2014, computer scientists Philipp Koehn and Jason Eisner are teaming with Chadia Abras in the School of Education's Center for Technology Education to develop a radically new way to learn a foreign language. The idea is based on macaronic language—a kind of text that mixes two languages into a Spanglish-like hybrid. While such mixing has traditionally been employed by novice speakers or for satirical purposes, Eisner realized that coupled with recent advances in machine translation, it could also help introduce learners to foreign vocabulary and syntax in a gentle and piecemeal way rather than all at once, as in a typical foreign text read laboriously with the aid of a dictionary.

To implement the idea, the researchers are developing software that translates a text progressively, with more and more of the text appearing in the foreign language as the reader's comprehension improves. For an English-to-German learner, for instance, the English phrase "a loaf of bread" could start to appear as "ein Loaf of Bread." When the reader is comfortable with reading the German word "ein" instead of the English "a," the program could progress to "ein Breadloaf," resembling German in syntax but retaining English words. The text would then become "ein Brot loaf," and finally the fully German "ein Brotlaib." The program will intermittently assess the student's reading comprehension and tune the amount of foreign language presented to match the reader's progress; readers also can direct the program to make the translation easier or harder.
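The progressive-mixing idea can be sketched in a few lines. This is not the project's software: the hand-written word alignments below stand in for what the real system would derive with machine-translation tools, and swapping a fixed fraction of words is a deliberately crude stand-in for the comprehension-driven tuning the article describes.

```python
# Hypothetical English-to-German word alignments for one phrase.
ALIGNED = [("a", "ein"), ("loaf", "Laib"), ("of", "von"), ("bread", "Brot")]

def macaronic(level):
    """Render the phrase at a learner level in [0, 1]: the higher the
    level, the more aligned words are shown in German (front-loaded)."""
    n_german = round(level * len(ALIGNED))
    return " ".join(de if i < n_german else en
                    for i, (en, de) in enumerate(ALIGNED))

print(macaronic(0.0))  # all English
print(macaronic(0.5))  # half German
print(macaronic(1.0))  # all German
```

In the actual application, the choice of *which* words to render in German would depend on the learner's demonstrated comprehension rather than position, but the output at intermediate levels gives the flavor of a macaronic text.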

Since the concept still needs to be proved, it makes an ideal Science of Learning Institute project, says Koehn. Eisner adds, "It's a bet that this will work out and will not, for example, confuse people or give them bad habits." The researchers plan to develop an English-to-German application and test it on the Web and in Johns Hopkins classes in combination with more traditional classroom and textbook instruction. If successful, the software could also be made available on the Internet for independent learners.

The project exemplifies how interdisciplinary teams can merge cutting-edge research in machine and human learning, says Kelly Fisher, the institute's assistant director and an assistant professor in the School of Education. "It's a software program that is learning itself, learning about the learner."

Institute-funded research also targets learners far beyond those who are acquiring skills for the first time. Learning is critical for the millions of people who lose skills when they suffer strokes and other neurological conditions and then need to regain them, often through lengthy and complex rehabilitation processes. Research on how to more efficiently relearn lost skills could make a huge difference in how quickly such people can return to work and fully participate in society again.

Cognitive scientist Michael McCloskey recently discovered a new, debilitating, and apparently very rare reading deficit known as alphanumeric visual awareness deficit, or AVAD. McCloskey, a professor in the Department of Cognitive Science, identified the condition based on two cases that came to him in one year. One of them, a 61-year-old Baltimore geologist with a neurological disease, could see fine in general, but when looking at letters or numbers, he saw only blurs. McCloskey and his colleagues found, however, that by teaching the patient new characters to use in place of the digits, they could restore his recognition abilities. The researchers developed a smartphone calculator app and modified the geologist's laptop to allow him to do math with the new symbols.

Seeking to build on this work, McCloskey assembled a team of neurologists and cognitive scientists to look for more people with AVAD in order to study the condition using brain imaging and other techniques, and to develop apps and other technology that would help affected people make sense of letters and numbers again. But the researchers have run into a roadblock: They haven't found a single other case of AVAD beyond the original two. A woman in North Carolina who seemed to have the deficit turned out to have a somewhat different condition. "On the one hand, it's interesting that [AVAD is] so rare; on the other hand, it's not what we were hoping for," McCloskey says.

So he and his team have reoriented their project, broadening the scope to include more-common character recognition disorders. For example, some people cannot recognize a number or letter when it is presented to them whole but can recognize a character if they watch it being drawn. Perhaps, says McCloskey, a smartphone app could be developed to read signs and other important text, and draw each character in sequence for people with this deficit. His team is also starting to collaborate with a software developer, MicroBLINK, to make an app that would identify characters and then read the text aloud.

In addition to potentially helping people regain lost abilities, many institute-funded projects such as McCloskey's are aimed at teasing apart the different brain regions and processes responsible for seemingly coherent learned skills like reading. Along these lines but focusing on an entirely different brain function, psychologist Marina Bedny, of the Krieger School's Department of Psychological and Brain Sciences, is heading a team that received an institute grant to study how the brain can retool its hardware when the original purpose of one of its regions is no longer needed. In sighted people, around a quarter of the brain is devoted to visual processing; in blind people, these brain regions get repurposed. How does this work? Bedny wondered.

To investigate this question, she and colleagues in the Krieger School's Department of Cognitive Science and in the Department of Physical Medicine and Rehabilitation at the School of Medicine are combining language comprehension assessments with a technique called transcranial magnetic stimulation, or TMS. They hope to learn whether brain regions normally devoted to sight are needed for language processing in blind people. The researchers recently collected data at a National Federation of the Blind convention and are in the process of testing a control group of sighted people. This effort would have been impossible without a source of support for interdisciplinary projects, Bedny says. "You just can't do this kind of research without an interdisciplinary team because you need so many different kinds of expertise," from linguistics to neuroimaging to TMS. "We really needed the whole team to make it happen."

In another example of institute-funded brain research, neuroscientist David Foster, of the School of Medicine, is taking on perhaps the most basic of all aspects of learning: memory formation. Specifically, Foster is interested in how certain kinds of memories are formed in a brain region called the hippocampus. He has studied this process in detail in rat brains, using dozens of implanted electrodes to precisely record electrical signals as the rats' neurons fire in sequences that represent stored memories. Foster would like to carry out similar studies in humans, but he cannot just go sticking electrodes deep into people's brains. So he first needs to develop less-invasive procedures.

Foster and William Anderson, an associate professor of neurosurgery in the School of Medicine, are now developing such techniques, piggybacking on research that Anderson's group does on epilepsy patients wherein they collect and analyze electrical data gathered from the surface of the brain. By piloting their study on a small sample of patients, the researchers hope to strengthen their position for applying for a larger grant, possibly from the NIH.

Bedny and Foster, both assistant professors, say that institute funding has allowed them to take on projects that might have otherwise been too risky and uncertain for an untenured faculty member. "I probably would not do too much looking outside of my own area to collaborate if I wasn't pushed and incentivized to do so by this kind of mechanism," Foster says. "This allows me, and pays me, to invest in thinking outside of my own small area."

The research grant program is the Science of Learning Institute's first major initiative, and many of the projects from the initial funding round are close to reporting results. The institute plans to continue awarding grants for at least three more years, and possibly more, depending on funding. To assess the program's success, Landau and Fisher are tracking metrics such as publications that awardees produce and external awards that leverage institute-funded work.

The institute also just launched its second big initiative: the Distinguished Science of Learning Fellowship Program. This program will award around five postdoctoral and predoctoral fellowships annually to students wanting to pursue interdisciplinary research in learning. Each fellow will have two advisers from different disciplines.

The fellows also will play a key role in the third prong of the institute's mission: translating and disseminating results beyond academia. Traditionally, much of the learning that occurs in the nation's formal classrooms and more informal settings is not as informed by research as it could be, says Fisher. To help change that, the Science of Learning Institute recently launched partnerships with the Port Discovery Children's Museum in Baltimore and the Children's Museum of Manhattan in New York to develop exhibits that are based on the research into the science of learning. The institute also plans to hire a dissemination expert to help translate research results into classrooms and other learning settings.

The Science of Learning Institute's stated mission is "to understand and optimize the most essential part of our human capital: the ability to learn." The mission makes the institute a crucial catalyst at a university—a place dedicated to learning—where all the pieces are already in place to make major progress on one of the most important scientific questions of our time, says Landau. "One of the goals of the Science of Learning Institute," she says, "is really to sew together the parts of the university that haven't yet interacted—to make it, in President Daniels' words, one university."

Source: JHU

New Stanford research finds computers are better judges of personality than friends and family

Written By Unknown on Friday, January 30, 2015 | 5:53 PM

New research shows that a computer's analysis of data can better judge a person's psychological traits than family and friends.
Computers can judge personality traits far more precisely than ever believed, according to newly published research.

In fact, they might do so better than one's friends and colleagues. The study, published Jan. 12 and conducted jointly by researchers at Stanford University and the University of Cambridge, compares the ability of computers and people to make accurate judgments about our personalities. People's judgments were based on their familiarity with the judged individual, while the computer used digital signals – Facebook "likes."

The researchers were Michal Kosinski, co-lead author and a postdoctoral fellow at Stanford's Department of Computer Science; Wu Youyou, co-lead author and a doctoral student at the University of Cambridge; and David Stillwell, a researcher at the University of Cambridge.

According to Kosinski, the findings reveal that by mining a person's Facebook "likes," a computer was able to predict a person's personality more accurately than most of their friends and family. Only a person's spouse came close to matching the computer's results.

The computer predictions were based on which articles, videos, artists and other items the person had liked on Facebook. The idea was to see how closely a computer prediction could match the subject's own scores on the five most basic personality dimensions: openness, conscientiousness, extraversion, agreeableness and neuroticism.

The researchers noted, "This is an emphatic demonstration of the ability of a person's psychological traits to be discovered by an analysis of data, not requiring any person-to-person interaction. It shows that machines can get to know us better than we'd previously thought, a crucial step in interactions between people and computers."

Kosinski, a computational social scientist, pointed out that "the findings also suggest that in the future, computers could be able to infer our psychological traits and react accordingly, leading to the emergence of emotionally intelligent and socially skilled machines."

"In this context," he added, "the human-computer interactions depicted in science fiction films such as Her seem not to be beyond our reach."

He said the research advances previous work from the University of Cambridge in 2013 that showed that a variety of psychological and demographic characteristics could be "predicted with startling accuracy" through Facebook likes.

The study's methodology

In the new study, researchers collected personality self-ratings from 86,220 volunteers using a standard 100-item personality questionnaire. Human judges, including Facebook friends and family members, rated each subject's personality using a 10-item questionnaire. Computer-based personality judgments, derived from each participant's Facebook likes, were obtained as well.

The results showed that the computer needed surprisingly few likes to outperform human judges: with just 10 likes, it predicted a subject's personality more accurately than a work colleague; with 70 likes, more accurately than a friend or roommate; with 150, a family member; and with 300, a spouse.
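A toy sketch can make the likes-to-traits pipeline concrete. This is not the study's actual model, which applied dimensionality reduction and regression to data from 86,220 people; the scoring rule below (rate each like by the average trait score of training users who clicked it, then average a new user's likes) and all the example likes and scores are invented for illustration.

```python
def fit(train):
    """train: list of (set_of_likes, trait_score) pairs.
    Returns a per-like average of the trait scores of its likers."""
    totals, counts = {}, {}
    for likes, score in train:
        for like in likes:
            totals[like] = totals.get(like, 0.0) + score
            counts[like] = counts.get(like, 0) + 1
    return {like: totals[like] / counts[like] for like in totals}

def predict(model, likes):
    """Predict a trait score as the mean score of the user's known likes."""
    known = [model[l] for l in likes if l in model]
    return sum(known) / len(known) if known else None

# Hypothetical "openness" scores on a 1-5 scale.
train = [({"hiking", "poetry"}, 4.2),
         ({"poetry", "museums"}, 4.6),
         ({"tv_reruns"}, 2.1)]
model = fit(train)
print(predict(model, {"poetry", "museums"}))
```

Even this crude scheme illustrates why more likes help: each additional like contributes another noisy estimate of the trait, and averaging over many of them sharpens the prediction.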

"Given that an average Facebook user has about 227 likes (and this number is growing steadily), artificial intelligence has a potential to know us better than our closest companions do," wrote Kosinski and his colleagues.

Why are machines better at judging personality than human beings?

Kosinski said that computers have a couple of key advantages over human beings in the area of personality analysis. Above all, they can retain and access large quantities of information, and analyze all this data through algorithms.

This provides the accuracy that the human mind has a hard time achieving due to a human tendency to give too much weight to one or two examples or to lapse into non-rational ways of thinking, the researchers wrote.

Nevertheless, the authors concede that the detection of some personality traits might be best left to human beings, such as "those (traits) without digital footprints and those depending on subtle cognition."

'Digital footprints'

Wu, co-lead author of the study, explains that the premise of a movie like Her (released in 2013) is becoming increasingly realistic. The film involves a man who strikes up a relationship with an advanced computer operating system that promises to be an intuitive entity in its own right.

"The ability to accurately assess psychological traits and states, using digital footprints of behavior, marks an important milestone on the path toward more social human-computer interactions," said Wu.

Such data-driven decisions could improve people's lives, the researchers said. For example, recruiters could better match candidates with jobs based on their personality, and companies could better match products and services with consumers' personalities.

"The ability to judge personality is an essential component of social living – from day-to-day decisions to long-term plans such as whom to marry, trust, hire or elect as president," said Stillwell.

Dystopia concerns

The researchers acknowledge that this type of research may conjure up privacy concerns about online data mining and tracking the activities of users.

"A future with our habits being an open book may seem dystopian to those who worry about privacy," they wrote.

Kosinski said, "We hope that consumers, technology developers and policymakers will tackle those challenges by supporting privacy-protecting laws and technologies, and giving the users full control over their digital footprints."

In July, Kosinski will begin a new appointment as an assistant professor at Stanford Graduate School of Business.

Source: Stanford University

How to learn math without fear, Stanford expert says

Stanford Prof. Boaler finds that children who excel in math learn to develop "number sense," which is much different from the memorization that is often stressed in school.
Students learn math best when they approach the subject as something they enjoy, according to a Stanford education expert. Speed pressure, timed testing and blind memorization pose high hurdles in the youthful pursuit of math.

"There is a common and damaging misconception in mathematics – the idea that strong math students are fast math students," said Jo Boaler, a Stanford professor of mathematics education and the lead author on a new working paper. Boaler's co-authors are Cathy Williams, cofounder of Stanford's YouCubed, and Amanda Confer, a Stanford graduate student in education.

Curriculum timely

Fortunately, said Boaler, the new national curriculum standards known as the Common Core State Standards for K-12 schools de-emphasize the rote memorization of math facts. Math facts are fundamental assumptions about math, such as the times tables (2 x 2 = 4). Still, the expectation of rote memorization continues in classrooms and households across the United States.

While research shows that knowledge of math facts is important, Boaler said the best way for students to know math facts is by using them regularly and developing understanding of numerical relations. Memorization, speed and test pressure can be damaging, she added.

On the other hand, people with "number sense" are those who can use numbers flexibly, she said. For example, when asked to solve the problem of 7 x 8, someone with number sense may have memorized 56, but they would also be able to use a strategy such as working out 10 x 7 and subtracting two 7s (70-14).
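The decomposition strategy above can be written out explicitly: a learner with number sense reaches 7 x 8 through an easier anchor fact (10 x 7) and an adjustment, rather than pure recall. A minimal sketch of that reasoning:

```python
def seven_times_eight_by_decomposition():
    """Reach 7 x 8 via an easy anchor fact plus an adjustment."""
    anchor = 10 * 7      # an easy, well-known fact: 70
    adjustment = 2 * 7   # we counted 10 sevens but only need 8
    return anchor - adjustment

print(seven_times_eight_by_decomposition())  # 56, same as direct recall
```

The point is not the code but the flexibility it mirrors: the same answer is reachable by several routes, so a learner is never stranded by a single forgotten fact.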

"They would not have to rely on a distant memory," Boaler wrote.

In fact, in one research project the investigators found that the high-achieving students actually used number sense, rather than rote memory, and the low-achieving students did not.

The conclusion was that the low achievers are often low achievers not because they know less but because they don't use numbers flexibly.

"They have been set on the wrong path, often from an early age, of trying to memorize methods instead of interacting with numbers flexibly," she wrote. Number sense is the foundation for all higher-level mathematics, she noted. 

Role of the brain

Boaler said that some students will be slower when memorizing, but still possess exceptional mathematics potential.

"Math facts are a very small part of mathematics, but unfortunately students who don't memorize math facts well often come to believe that they can never be successful with math and turn away from the subject," she said.

Prior research found that students who memorized more easily were not higher achieving – in fact, they did not have what the researchers described as more "math ability" or higher IQ scores. Using an MRI scanner, the only brain differences the researchers found were in a region called the hippocampus, which is involved in memorizing facts.

But according to Boaler, when students are stressed – such as when they are solving math questions under time pressure – the working memory becomes blocked and the students cannot as easily recall the math facts they had previously studied. This particularly occurs among higher achieving students and female students, she said.

Some estimates suggest that at least a third of students experience extreme stress or "math anxiety" when they take a timed test, no matter their level of achievement. "When we put students through this anxiety-provoking experience, we lose students from mathematics," she said.

Boaler contrasts the common approach to teaching math with that of teaching English. In English, students read and understand novels or poetry without needing to memorize the meanings of words through testing. They learn words by using them in many different situations – talking, reading and writing.

"No English student would say or think that learning about English is about the fast memorization and fast recall of words," she added.

Strategies, activities 

In her paper, "Fluency without Fear," Boaler provides activities for teachers and parents that help students learn math facts at the same time as developing number sense. These include number talks, addition and multiplication activities, and math cards.

Importantly, she said, these activities include a focus on the visual representation of number facts. When students connect visual and symbolic representations of numbers, they are using different pathways in the brain, which deepens their learning, as shown by recent brain research.

"Math fluency" is often misinterpreted, with an over-emphasis on speed and memorization, she said. "I work with a lot of mathematicians, and one thing I notice about them is that they are not particularly fast with numbers; in fact some of them are rather slow. This is not a bad thing; they are slow because they think deeply and carefully about mathematics."

She refers to the famous French mathematician Laurent Schwartz, who wrote in his autobiography that he often felt stupid in school because he was one of the slowest math thinkers in class.

Math anxiety and fear play a big role in students dropping out of mathematics, said Boaler.

"When we emphasize memorization and testing in the name of fluency we are harming children, we are risking the future of our ever-quantitative society and we are threatening the discipline of mathematics. We have the research knowledge we need to change this and to enable all children to be powerful mathematics learners. Now is the time to use it," she said.

Source: Stanford University

On the ups and downs of the seemingly idle brain

Cortical colors: Inhibitory cells abound in the barrel cortex of the mouse, where three main types were labeled to fluoresce in different colors: PV (red), SOM (blue), and 5HT3aR, which includes VIP and NPY (green). Image: Connors lab/Brown University
Even when it seems not to be doing much, the brain maintains a baseline of activity in the form of up and down states of bustle and quiet. To accomplish this seemingly simple cycle, it maintains a complex balance between the activity of many excitatory and inhibitory cells, Brown University scientists report in the Journal of Neuroscience.

PROVIDENCE, R.I. [Brown University] — Even in its quietest moments, the brain is never “off.” Instead, while under anesthesia, during slow-wave sleep, or even amid calm wakefulness, the brain’s cortex maintains a cycle of activity and quiet called “up” and “down” states. A new study by Brown University neuroscientists probed deep into this somewhat mysterious cycle in mice, to learn more about how the mammalian brain accomplishes it.

In addition to an apparent role in maintaining a baseline of brain activity, the up and down cycling serves as a model for other ways in which activity across the cortex is modulated, said Garrett Neske, graduate student and lead author. To study how the brain maintains this cycling, he found, is to learn how the brain walks a healthy line between excitement and inhibition as it strives to be idle but ready, a bit like a car at a stoplight.
Garrett Neske: To study how the brain maintains up and down cycles is to learn how the brain strives to be idle but ready, a bit like a car at a stoplight. Photo: David Orenstein/Brown University
“It is very important to regulate that balance of excitation and inhibition,” said senior author Barry Connors, professor and chair of neuroscience at Brown. “Too much excitation relative to inhibition you get a seizure, too little you become comatose. So whether you are awake and active and processing information or whether you are in some kind of idling state of the brain, you need to maintain that balance.”

The cycling may seem simple, but what Neske and Connors found in their investigation, published in the Journal of Neuroscience, is that it involves a good deal of complexity. They focused on five different types of cells in a particular area of the mouse cortex and found that all five appear to contribute uniquely to the ups and downs.

Cells in a barrel

Specifically, the researchers, including Saundra Patrick, neuroscience research associate and second author, looked at the activity of excitatory pyramidal cells and four kinds of inhibitory interneurons (PV, SOM, VIP and NPY) in different layers of the barrel cortex. That part of the cortex is responsible for processing sensations on the face, including the whiskers.

Neske induced up and down cycles in slices of tissue from the barrel cortex and recorded each cell type’s electrical properties and behaviors, such as its firing rate and the amounts of excitation and inhibition it received from other neurons.

The picture that emerged is that all types of interneurons were active. This included the most abundant interneuron subtype (the fast-spiking PV cell), and the various more slowly spiking subtypes (SOM, VIP, NPY). In fact, Connors said, the latter cells were active at levels similar to or higher than neighboring excitatory cells, contributing strong inhibition during the up state.

One way such findings are important is in how they complement recent results from another research group, at Yale University. In that study, scientists looked at a different part of the cortex, called the entorhinal cortex. There they found that only one type of inhibitory neuron, PV, seemed to be doing anything in the up state to balance out the excitation of the pyramidal neurons. The other inhibitory neurons stayed virtually silent. In his study, Neske replicated those results.

Taken together, the studies indicate that even though up and down cycles occur throughout the cortex, they may be regulated differently in different parts.

“It suggests that inhibition plays different roles in persistent activity in these two regions of cortex and it calls for more comparative work to be done among cortical areas,” Neske said. “You can’t just use one cortical region as the model for all inhibitory interneuron function.”

From observation to manipulation

Since observing the different behaviors of the neuron types, Neske has moved on to manipulating them to see what role each of them plays. Using the technique of optogenetics, in which the firing of different neuron types can be activated or suppressed with pulses of colored light, Neske is experimenting with squelching different interneurons to see how their enforced abstention affects the up and down cycle.

When the work is done, he should emerge with an even clearer idea of the brain’s intricate and diligent efforts to remain balanced between excitation and inhibition.

The National Institutes of Health (grants NS-050434, MH-086400, and T32NS062443) and the Defense Advanced Research Projects Agency (grant DARPA-BAA-09-27) supported the research.

Source: Brown University

Smart device delivers results for kids with asthma

Written By Unknown on Thursday, January 29, 2015 | 4:28 AM

Smart device
A new smart asthma inhaler with an audio-visual function has dramatically improved child and adolescent use of preventative asthma medication.

The users also experienced significant improvements to their symptoms, well-being and quality of life and needed their reliever medication less frequently.

The University of Auckland study, funded by Cure Kids and the Health Research Council, showed a significant improvement in night time awakening, coughing and wheezing.

Clinical pharmacist, Amy Chan, a doctoral student with the University of Auckland, is the lead author on the paper.  

“We know one of the key reasons for children not taking their medication is parent and patient forgetfulness.  The Smartinhaler reminder system is now clinically proven to be a real solution to the problem,” she says.

“What we’ve been able to establish for the first time with this study is that the ringtone Smartinhaler significantly improves adherence to preventative medication, which results in improved quality of life for children with asthma. It’s hugely exciting,” says Ms Chan.

Children in the study were also given a Smartinhaler tracker for their rescue or ‘blue’ inhaler to measure the amount of rescue medication they used. The device objectively recorded the date and time of each rescue medication use, providing a good indication of when asthma was out of control.

When symptoms worsened, participants used their reliever inhaler (the ‘blue’ inhaler), also known as a rescue medication because it provides immediate relief. Recent studies have shown that overuse of the blue inhaler is a predictor of worsening asthma and general morbidity.

The study found that use of the rescue medication was significantly reduced in the group using the Nexus6 Smartinhaler reminder device.

Cure Kids Chair of Child Health Research and Ms Chan's supervisor on the study, Professor Ed Mitchell, says he is “absolutely staggered by the size of the effect. To see the improvement in the lives of these children is astounding.”

The participants also reported taking part in more sports and family activities. Parents reported feeling less frightened by their child’s asthma.

New Zealand has the second highest rates of asthma in the world and one in four Kiwi children experiences asthma symptoms. Despite this, regular adherence to asthma medication is poor.

New Zealand digital health company Nexus6 Ltd created the new Smartinhaler device called the SmartTrack, which was used in the study. The device has 14 different ringtones, which are cycled so users don’t get reminder fatigue. The SmartTrack reminder is only triggered when a dose is missed.

The results were published this month in The Lancet Respiratory Medicine. To the researchers’ knowledge, this is the largest study in the world to investigate the effects of an inhaler device with an audio-visual reminder function on asthma adherence and outcomes in children and adolescents.

It is also the first to show significant benefits in asthma outcomes and quality of life. The results are expected to gain international interest.

The controlled trial recruited 220 children between the ages of six and 15 who presented to emergency departments with asthma symptoms.

The study was randomised with half of the participants receiving a SmartTrack device for use with their preventative or ‘orange’ inhaler that had the audiovisual elements turned on, and the other half receiving the same device with the audiovisual elements turned off.

Participants were followed up every two months for six months and general asthma control was checked.

Key findings from the study were:

The medication adherence rate for the patient group given the audiovisual-enabled SmartTrack inhaler was 84 percent, compared to 30 percent for the control group. This equals a 180 percent increase in medication adherence.
The use of emergency medication, or the ‘blue’ inhaler, was significantly reduced. The median percentage of days on which a reliever was used was 9.5 percent in the intervention group compared to 17.4 percent in the control group. This equals a 45 percent reduction in rescue medication use.
Symptoms, well-being and quality of life for the children were significantly improved.
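The headline percentages follow directly from the raw rates reported above. As a quick arithmetic check (a verification sketch only, not part of the study):

```python
# Verify the reported effect sizes using the rates from the key findings above.

def relative_change(intervention, control):
    """Relative change of the intervention rate versus the control rate, as a percentage."""
    return (intervention - control) / control * 100

# Adherence: 84% with reminders vs 30% without -> 180% relative increase.
adherence_increase = relative_change(84, 30)
print(round(adherence_increase))  # 180

# Reliever use: 9.5% of days vs 17.4% of days -> ~45% relative reduction.
reliever_reduction = -relative_change(9.5, 17.4)
print(round(reliever_reduction))  # 45
```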

Source: University of Auckland

Structure of Neuron-Connecting Synaptic Adhesion Molecules Discovered

Written By Unknown on Wednesday, January 28, 2015 | 9:22 PM

Figure 1: Overview of the PTPδ Ig1–3/Slitrk1 LRR1 complex. Copyright: Korea Advanced Institute of Science and Technology
A research team has found the three-dimensional structure of synaptic adhesion molecules, which orchestrate synaptogenesis. The research findings also propose the mechanism of synapses in its initial formation.

Some brain diseases such as obsessive compulsive disorder (OCD) or bipolar disorders arise from a malfunction of synapses. The team expects the findings to be applied in investigating pathogenesis and developing medicines for such diseases.

The research was conducted by a Master’s candidate Kee Hun Kim, Professor Ji Won Um from Yonsei University, and Professor Beom Seok Park from Eulji University under the guidance of Professor Homin Kim from the Graduate School of Medical Science and Engineering, Korea Advanced Institute of Science and Technology (KAIST), and Professor Jaewon Ko from Yonsei University. Sponsored by the Ministry of Science, ICT and Future Planning and the National Research Foundation of Korea, the research findings were published online in the November 14th issue of Nature Communications.
Figure 2: Representative negative-stained electron microscopy images of Slitrk1 Full ectodomain (yellow arrows indicate the horseshoe-shaped LRR domains). The typical horseshoe-shaped structures and the randomness of the relative positions of each LRR domain can be observed from the two-dimensional class averages displayed in the orange box. Copyright : Korea Advanced Institute of Science and Technology
Slitrk, a transmembrane protein found in neurons, interacts with the presynaptic leukocyte common antigen-related receptor protein tyrosine phosphatases (LAR-RPTPs) to form a protein complex. The complex is involved in the early stages of synapse development and balances excitatory and inhibitory signals of neurons.

Disorders of these two proteins are known to cause synaptic malfunction, resulting in neuropsychiatric conditions such as autism, epilepsy, OCD, and bipolar disorder. However, because neither the structure nor the synaptogenic function of these proteins was understood, the development of treatments could not progress.

The research team determined the three-dimensional structures of the two synaptic adhesion molecules, Slitrk and LAR-RPTPs, and identified their regions of interaction through protein crystallography and transmission electron microscopy (TEM). Furthermore, they found that synapse formation is induced after the two bound synaptic adhesion molecules assemble into a cluster.
Figure 3: Model of the two-step presynaptic differentiation process mediated by the binding of Slitrks to LAR-RPTPs and subsequent lateral assembly of trans-synaptic LAR-RPTPs/Slitrk complexes. Copyright: Korea Advanced Institute of Science and Technology
Professor Kim said, “The research findings will serve as a basis for understanding the pathogenesis of brain diseases that arise from the malfunction of synaptic adhesion molecules. In particular, this is a good example in which collaboration between structural biology and neurobiology has led to a fruitful result.” Professor Ko commented that “this will give new direction to research on synapse formation by revealing the molecular mechanism of synaptic adhesion molecules.”

Source: KAIST

Why Do We Feel Thirst? An Interview with Yuki Oka

Written By Unknown on Tuesday, January 27, 2015 | 6:52 PM

Credit: Lance Hayashida/Caltech Marketing and Communications
To fight dehydration on a hot summer day, you instinctively crave the relief provided by a tall glass of water. But how does your brain sense the need for water, generate the sensation of thirst, and then ultimately turn that signal into a behavioral trigger that leads you to drink water? That's what Yuki Oka, a new assistant professor of biology at Caltech, wants to find out.

Oka's research focuses on how the brain and body work together to maintain a healthy ratio of salt to water, part of a delicate form of biological balance called homeostasis.

Recently, Oka came to Caltech from Columbia University. We spoke with him about his work, his interests outside of the lab, and why he's excited to be joining the faculty at Caltech.

Can you tell us a bit more about your research?

The goal of my research is to understand the mechanisms by which the brain and body cooperate to maintain our internal environment's stability, which is called homeostasis. I'm especially focusing on fluid homeostasis, the fundamental mechanism that regulates the balance of water and salt. When water or salt are depleted in the body, the brain generates a signal that causes either a thirst or a salt craving. And that craving then drives animals to either drink water or eat something salty.

I'd like to know how our brain generates such a specific motivation simply by sensing internal state, and then how that motivation—which is really just neural activity in the brain—goes on to control the behavior.

Why did you choose to study thirst?

After finishing my Ph.D. in Japan, I came to Columbia University, where I worked on salt-sensing mechanisms in the mammalian taste system. We found that the peripheral taste system has a key function for salt homeostasis in the body by regulating our salt intake behavior. But of course, the peripheral sensor does not work by itself. It requires a controller, the brain, which uses information from the sensor. So I decided to move on to explore the function of the brain, the real driver of our behaviors.

I was fascinated by thirst because the behavior it generates is very robust and stereotyped across various species. If an animal feels thirst, the behavioral output is simply to drink water. On the other hand, if the brain triggers salt appetite, then the animal specifically looks for salt—nothing else. These direct causal relations make it an ideal system to study the link between the neural circuit and the behavior.

You recently published a paper on this work in the journal Nature. Could you tell us about those findings?

In the paper, we linked specific neural populations in the brain to water drinking behavior. Previous work from other labs suggested that thirst may stem from a part of the brain called the hypothalamus, so we wanted to identify which groups of neurons in the hypothalamus control thirst. Using a technique called optogenetics that can manipulate neural activities with light, we found two distinct populations of neurons that control thirst in two opposite directions. When we activated one of those two populations, it evoked an intense drinking behavior even in fully water-satiated animals. In contrast, activation of a second population drastically suppressed drinking, even in highly water-deprived thirsty animals.  In other words, we could artificially create or erase the desire for drinking water.

Our findings suggest that there is an innate brain circuit that can turn an animal's water-drinking behavior on and off, and that this circuit likely functions as a center for thirst control in the mammalian brain. This work was performed with support from Howard Hughes Medical Institute and National Institutes of Health [for Charles S. Zuker at Columbia University, Oka's former advisor].

You use a mouse model to study thirst, but does this work have applications for humans?

There are many fluid homeostasis-associated conditions; one example is dehydration. We cannot specifically say a direct application for humans since our studies are focused on basic research. But if the same mechanisms and circuits exist in mice and humans, our studies will provide important insights into human physiologies and conditions.

Where did you grow up—and what started your initial interest in science?

I grew up in Japan, close to Tokyo, but not really in the center of the city. It was a nice combination between the big city and nature. There was a big park close to my house and when I was a child, I went there every day and observed plants and animals. That's pretty much how I spent my childhood. My parents are not scientists—neither of them, actually. It was just my innate interest in nature that made me want to be a scientist.

What drew you to Caltech?

I'm really excited about the environment here and the great climate. That's actually not trivial; I think the climate really does affect the people. For example, if you compare Southern California to New York, it's just a totally different character. I came here for a visit last January, and although it was my first time at Caltech I kind of felt a bond. I hadn't even received an offer yet, but I just intuitively thought, "This is probably the place for me."

I'm also looking forward to talking to my colleagues here who use fMRI for human behavioral research. One great advantage about using human subjects in behavioral studies is that they can report back to you about how they feel. There are certainly advantages of using an animal model, like mice. But they cannot report back. We just observe their behavior and say, "They are drinking water, so they must be thirsty." But that is totally different than someone telling you, "I feel thirsty." I believe that combining advantages of animal and human studies should allow us to address important questions about brain functions.

Do you have any hobbies?

I play basketball in my spare time, but my major hobby is collecting fossils. I have some trilobites and, actually, I have a complete set of bones from a type of herbivorous dinosaur. It is being shipped from New York right now and I may put it in my new office.

Written by Jessica Stoller-Conrad


Source: California Institute of Technology

Virtual reality speeds up rehabilitation: Integrating force feedback into therapies

Written By Unknown on Sunday, January 18, 2015 | 4:47 PM

A child is receiving virtual door opening training under the guidance of a therapist. Credit: Copyright The Hong Kong Polytechnic University
The Hong Kong Polytechnic University has successfully developed a novel training programme using haptic technology for impaired hands that cannot function normally. This programme is unique in that it provides force feedback, which gives the user a true sense of weight through the control device.

Our hands are essential to our lives; we need them in all daily tasks including eating, bathing and getting dressed. However, even the simplest tasks are challenging for people with impaired hands due to various conditions such as cerebral palsy, stroke and ageing. Fortunately, they will soon benefit from a new training technology which may greatly improve their conditions.

In response to these therapeutic needs, a computerized training programme for impaired hands has been developed at the School of Nursing of The Hong Kong Polytechnic University. Patients exercise their hands by playing a series of well-designed computer games that simulate everyday tasks, such as opening a locked door with a key or pouring tea into a cup. While they play, their hand movements are monitored and recorded by a haptic device, which is connected to the control unit held by the patient at one end and a computer at the other. The haptic device feeds the data into the computer, so the patient's actions are instantly reflected in the animation on screen.

In addition, the haptic technology the programme employs is more true-to-life than similar programmes, as feedback is provided through force created by the control unit. For example, players can literally feel the weight of a simulated bottle diminish as the water is poured out. This precision greatly enhances training effectiveness and improves the patient's coordination.
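The diminishing-weight effect described above can be thought of as simple weight rendering: the downward force the device applies tracks the mass of the simulated bottle as it empties. The sketch below is purely illustrative; the function names, masses, and pour rate are assumptions, not the University's actual implementation:

```python
# Illustrative sketch of weight-based force feedback for a simulated bottle.
# NOT the actual PolyU implementation; all names and constants are assumptions.

G = 9.81  # gravitational acceleration, m/s^2

def bottle_force(bottle_mass_kg, water_mass_kg):
    """Downward force (newtons) the haptic device should render for the bottle."""
    return (bottle_mass_kg + water_mass_kg) * G

# Pouring: water mass decreases over time, so the rendered force diminishes.
bottle = 0.1      # empty bottle mass, kg
water = 0.5       # initial water mass, kg
pour_rate = 0.05  # water lost per time step, kg

forces = []
while water > 0:
    forces.append(bottle_force(bottle, water))
    water = max(0.0, water - pour_rate)
forces.append(bottle_force(bottle, water))  # empty bottle

# The rendered force ramps down from full to empty as the user "pours".
print(round(forces[0], 2), round(forces[-1], 2))  # 5.89 0.98
```

In a real system this force update would run inside the haptic device's high-rate control loop rather than a simple while loop.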

Game-based therapies are highly motivating. Firstly, playing 3D games in colourful animation is more interesting than monotonous physical exercises. Secondly, a reward system incorporated in the programme is sure to fuel a sense of competition and accomplishment. "Our games are designed to be engaging. When players make successful attempts, they get bonus points. And as they win, they move on to the next level, where more attractive rewards are waiting," said Dr Kup-sze Choi, the leader of the research team. It is satisfying for players to work their way up and keep going with the therapy, thereby improving their hand functions.

Compared to physical training, computer simulated training is a safer option when sharp or breakable objects are involved, making practices on preparing simple meals with a knife possible. It is also less likely to be interrupted by undesired circumstances. Dr Choi explained, "For instance, the hands of cerebral palsy sufferers are usually stiff, weak and prone to uncontrolled movements. If they practise pouring real tea in repeated sessions, they may make spills all over the place and end up soaking wet, requiring the healthcare workers to clean up the mess. That is not a good thing for both the trainee and the trainer." With computer simulation, there will be no such interruptions.

To cater to different degrees of disability, the programme has a built-in difficulty mode with which the level of difficulty can be adjusted with the touch of a button. Therapists can also monitor their patients' progress easily, as the system keeps track of their movements and performance.

The effectiveness of this training programme was preliminarily confirmed when a similar tool, aimed at improving handwriting, was tested on children at the Hong Kong Red Cross Princess Alexandra School. The results showed a marked improvement in the time the children needed to complete the task after two weeks of training. More tests and trials are on the way, and the team expects that a longer period of computer-assisted training will yield greater benefits. The training system has already won a Silver Medal at the 42nd International Exhibition of Inventions of Geneva in Switzerland.

According to Dr Choi, computer simulated training using haptic technology will widen access to rehabilitation and help more patients with impaired hands. In the future, the team will work on combining this computer-aided rehabilitation programme with traditional therapy in order to optimize the training system and benefit more patients.

The prototype of the haptic platform customized for self-care training. Copyright: The Hong Kong Polytechnic University

Gut microbiota influences blood-brain barrier permeability

Uptake of the substance Raclopride in the brain of germ-free versus conventional mice.
Credit: Miklos Toth
A new study in mice, conducted by researchers at Sweden's Karolinska Institutet together with colleagues in Singapore and the United States, shows that our natural gut-residing microbes can influence the integrity of the blood-brain barrier, which protects the brain from harmful substances in the blood. According to the authors, the findings provide experimental evidence that our indigenous microbes contribute to the mechanism that closes the blood-brain barrier before birth. The results also support previous observations that gut microbiota can impact brain development and function.

The blood-brain barrier is a highly selective barrier that prevents unwanted molecules and cells from entering the brain from the bloodstream. In the current study, being published in the journal Science Translational Medicine, the international interdisciplinary research team demonstrates that the transport of molecules across the blood-brain barrier can be modulated by gut microbes -- which therefore play an important role in the protection of the brain.

The investigators reached this conclusion by comparing the integrity and development of the blood-brain barrier between two groups of mice: the first group was raised in an environment where they were exposed to normal bacteria, and the second (called germ-free mice) was kept in a sterile environment without any bacteria.

“We showed that the presence of the maternal gut microbiota during late pregnancy blocked the passage of labeled antibodies from the circulation into the brain parenchyma of the growing fetus,” says first author Dr. Viorica Braniste at the Department of Microbiology, Tumor and Cell Biology at Karolinska Institutet. “In contrast, in age-matched fetuses from germ-free mothers, these labeled antibodies easily crossed the blood-brain barrier and were detected within the brain parenchyma.”

The team also showed that the increased 'leakiness' of the blood-brain barrier, observed in germ-free mice from early life, was maintained into adulthood. Interestingly, this 'leakiness' could be abrogated if the mice were exposed to fecal transplantation of normal gut microbes. 

The precise molecular mechanisms remain to be identified. However, the team was able to show that so-called tight junction proteins, which are known to be important for the blood-brain barrier permeability, did undergo structural changes and had altered levels of expression in the absence of bacteria.

According to the researchers, the findings provide experimental evidence that alterations of our indigenous microbiota may have far-reaching consequences for the blood-brain barrier function throughout life.

“These findings further underscore the importance of the maternal microbes during early life and show that our bacteria are an integrated component of our body physiology,” says Professor Sven Pettersson, the principal investigator at the Department of Microbiology, Tumor and Cell Biology. “Given that the microbiome composition and diversity change over time, it is tempting to speculate that the blood-brain barrier integrity also may fluctuate depending on the microbiome. This knowledge may be used to develop new ways of opening the blood-brain barrier to increase the efficacy of brain cancer drugs and to design treatment regimes that strengthen the integrity of the blood-brain barrier.”

Transplant drug could boost power of brain tumor treatments, study finds

Drs. Maria Castro and Pedro Lowenstein, both of the U-M Department of Neurosurgery, co-led the research. Credit: Image courtesy of University of Michigan Health System
Every day, organ transplant patients around the world take a drug called rapamycin to keep their immune systems from rejecting their new kidneys and hearts. New research suggests that the same drug could help brain tumor patients by boosting the effect of new immune-based therapies.

In experiments in animals, researchers from the University of Michigan Medical School showed that adding rapamycin to an immunotherapy approach strengthened the immune response against brain tumor cells.

What's more, the drug also increased the immune system's "memory" cells so that they could attack the tumor if it ever reared its head again. The mice and rats in the study that received rapamycin lived longer than those that didn't.

Now, the U-M team plans to add rapamycin to clinical gene therapy and immunotherapy trials to improve the treatment of brain tumors. They currently have a trial under way at the U-M Health System which tests a two-part gene therapy approach in patients with brain tumors called gliomas in an effort to get the immune system to attack the tumor. In future clinical trials, adding rapamycin could increase the therapeutic response.

The new findings, published online in the journal Molecular Cancer Therapeutics, show that combining rapamycin with a gene therapy approach enhanced the animals' ability to summon immune cells called CD8+ T cells to kill tumor cells directly. Due to this cytotoxic effect, the tumors shrank and the animals lived longer.

But adding rapamycin to immunotherapy, even for a short while, also allowed the rodents to develop tumor-specific memory CD8+ T cells that remember the specific "signature" of the glioma tumor cells and attack them swiftly when a tumor is introduced into the brain again.

"We had some indication that rapamycin would enhance the cytotoxic T cell effect, from previous experiments in both animals and humans showing that the drug produced modest effects by itself," says Maria Castro, Ph.D., senior author of the new paper. Past clinical trials of rapamycin in brain tumors have failed.

"But in combination with immunotherapy, it became a dramatic effect, and enhanced the efficacy of memory T cells too. This highlights the versatility of the immunotherapy approach to glioma." Castro is the R.C. Schneider Collegiate Professor of neurosurgery and a professor of cell and developmental biology at U-M.

Rapamycin is an FDA-approved drug that produces few side effects in transplant patients and others who take it to modify their immune response. So in the future, Castro and her colleagues plan to propose new clinical trials that will add rapamycin to immune gene therapy trials like those already ongoing at UMHS.

She notes that other researchers currently studying immunotherapies for glioma and other brain tumors should also consider doing the same. "This could be a universal mechanism for enhancing efficacy of immunotherapies in glioma," she says.

Rapamycin inhibits a specific molecule in cells, called mTOR. As part of the research, Castro and her colleagues determined that brain tumor cells use the mTOR pathway to hamper the immune response of patients.

This allows the tumor to trick the immune system, so it can continue growing without alerting the body's T cells that a foreign entity is present. Inhibiting mTOR with rapamycin, then, uncloaks the cells and makes them vulnerable to attack.

Castro notes that if the drug proves useful in human patients, it could also be used for long-term prevention of recurrence in patients who have had the bulk of their tumor removed. "This tumor always comes back," she says.

 