Showing posts with label COMPUTER MODELING. Show all posts

Competing in Robotics

Written By Unknown on Friday, February 6, 2015 | 7:58 PM

RoboCup Junior is an international competition for the construction and programming of robots. It’s a part of the major RoboCup initiative – one of the biggest robot competitions in the world, with thousands of participants from over 40 countries.

Linköping students are organising the Swedish qualifying rounds for RoboCup Junior (in Swedish Junior-VM i robotik) where children and young people up to the age of 19 can take part. The winners get to represent Sweden in the 2015 international finals, which will be held in Hefei, China.

Mr Löfgren, who is in the fourth year of his studies for a master’s in Engineering Physics and Electronics, is a previous participant in the competition and current project leader for the RoboCup Junior finals in Linköping. He is the youngest ever member of the technical committee, which is mainly composed of eminent researchers and teachers.

“RoboCup Junior has been held in Sweden since 2009. I took part the very first year, won the competition, and got to represent Sweden at the world championship in Austria.”

Since then, Mr Löfgren has won many competitions, which have allowed him to represent Sweden in Singapore and compete in the World Championships in Istanbul.

“After that I was too old; in 2012 I became team leader for one team and a judge for the national competition. I was also appointed to the international organizing committee of RoboCup Junior Rescue, and the technical committee.”

He has also been involved on the international stage, for example in Brazil where he wrote the rules for the next year’s competition. He was also team leader at the World Championships in Eindhoven. In 2013, Mr Löfgren started a student society whose aim was to organise RoboCup Junior in Sweden. The FIA student association (the Intelligent Autonomous Systems Society) grew, and now RoboCup Junior is just one of many events the society organises each year.

How do you select the participants?

“I was chosen as project leader for the competition by the board of the FIA, and then I appointed a project team of five people to help me plan and organise the competition in Linköping.”

The FIA is also organising a competition for university students and the public in conjunction with RoboCup Junior, so that more people can get the chance to compete with robots.

“I love to compete and I’ve competed in knowledge for a very long time,” Mr Löfgren says.

Robots around the dinner table

What has your involvement given you in practical terms?
“Being involved with robots has given me a great advantage in my studies here at the university. I have learned a great deal not only about electronics, programming and construction, but also about leadership and other cultures on my many travels, as well as how to collaborate on international projects.”

Mr Löfgren thinks it’s great to see how older researchers and professors listen to what he has to say, and he is looking forward to the next cooking competition that will be held in Madrid in November. It will consist of seeing how well the robots manage to cook tomato soup, write a shopping list and find and switch off a stove hob that has been left on.

He has already been offered jobs, but turned them down as he wants to finish his studies first before he starts his “real” working life.

Developing robots for space, robots that explore other planets, and robots that care for the elderly by doing the heavy work so that staff can devote time to personal contact with elderly people are examples of his dream jobs.

“I want to develop the technology of tomorrow and I’m open to everything that has to do with the development of technology. As I have worked with robots for 15 years, they are very dear to my heart.”

Text: Zen Dinah, student reporter
Photo: Julius Jeuthe, student photographer

Source: Linköping University

'Microlesions' in epilepsy discovered by novel technique

Written By Unknown on Friday, January 16, 2015 | 10:25 PM

Clusters of differentially expressed genes predict cellular abnormalities. Credit: Jeffrey Loeb
Using an innovative technique combining genetic analysis and mathematical modeling with some basic sleuthing, researchers have identified previously undescribed microlesions in brain tissue from epileptic patients. The millimeter-sized abnormalities may explain why areas of the brain that appear normal can produce severe seizures in many children and adults with epilepsy.

The findings, by researchers at the University of Illinois at Chicago College of Medicine, Wayne State University and Montana State University, are reported in the journal Brain.

Epilepsy affects about 1 percent of people worldwide. Its hallmark is unpredictable seizures that occur when groups of neurons in the brain abnormally fire in unison. Sometimes epilepsy can be traced back to visible abnormalities in the brain where seizures start, but in many cases, there are no clear abnormalities or scarring that would account for the epileptic activity.

"Understanding what is wrong in human brain tissues that produce seizures is critical for the development of new treatments because roughly one third of patients with epilepsy don't respond to our currently available medications," said Dr. Jeffrey Loeb, professor and head of neurology and rehabilitation in the UIC College of Medicine and corresponding author on the study. "Knowing these microlesions exist is a huge step forward in our understanding of human epilepsy and presents new targets for treating this disease."

Loeb and colleagues searched for cellular changes associated with epilepsy by analyzing thousands of genes in tissues from 15 patients who underwent surgery to treat their epilepsy. They used a mathematical modeling technique called cluster analysis to sort through huge amounts of genetic data.

Using the model, they were able to predict and then confirm the presence of tiny regions of cellular abnormalities -- the microlesions -- in human brain tissue with high levels of epileptic electrical activity, or 'high-spiking' areas where seizures begin.

"Using cluster analysis is like using a metal detector to find a needle in a haystack," said Loeb. The model, he said, revealed 11 gene clusters that "jumped right out at us" and were either up-regulated or down-regulated in tissue with high levels of epileptic electrical activity compared to tissue with less epileptic activity from the same patient.
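The gene-clustering step described here can be sketched in a few lines. The snippet below is a minimal illustration, not the study's actual pipeline: it builds a hypothetical expression matrix with up-regulated, down-regulated and unchanged gene groups, then recovers those groups with hierarchical (Ward) clustering from SciPy. All gene counts and effect sizes are invented.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Hypothetical expression matrix: 60 genes x 4 paired tissue samples
# (high-spiking vs. low-spiking regions from the same patients).
low = rng.normal(0.0, 0.3, size=(60, 4))
shift = np.zeros((60, 1))
shift[:20] = 2.0     # first 20 genes up-regulated in high-spiking tissue
shift[20:40] = -2.0  # next 20 down-regulated
high = low + shift + rng.normal(0.0, 0.3, size=(60, 4))

# Cluster genes by their high-vs-low expression difference profile.
diff = high - low
Z = linkage(diff, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")

for k in sorted(set(labels)):
    members = np.where(labels == k)[0]
    print(f"cluster {k}: {len(members)} genes, "
          f"mean change {diff[members].mean():+.2f}")
```

With clear up- and down-regulated groups like these, the three clusters "jump right out" in exactly the sense Loeb describes; real expression data is far noisier.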

When they matched the genes to the types of cells they came from, the results predicted that there would be reductions of certain types of neurons and increases in blood vessels and inflammatory cells in brain tissue with high epileptic activity.

When Fabien Dachet, an expert in bioinformatics research at UIC and first author of the study, went back to the tissue samples and stained for these cells, he found that all of the predictions were correct: there was a marked increase in blood vessels and inflammatory cells, and there were focal microlesions made up of neurons that had lost most of the normal connections that allow them to communicate with one another. "We think that these newly found microlesions lead to spontaneous, abnormal electrical currents in the brain that lead to epileptic seizures," said Loeb.

Loeb and his colleagues at UIC are using the same approach to look for the clusters of differentially expressed genes associated with ALS, a neurodegenerative disease, and in brain tumors. "We now have a way to predict cellular changes by simply measuring the genetic composition, with some fairly simple calculations, between more- and less-affected epileptic human tissues," explained Loeb.

"This technique gives us the ability to discover previously unknown cellular abnormalities in almost any disease where we have access to human tissues," Loeb said. He is currently developing at UIC a national 'neurorepository' of electrically mapped and genetically analyzed brain tissue for such studies.

Sculpting costumes with 3-D printers is 'the way theater is headed,' say theater education experts

Written By Unknown on Wednesday, January 14, 2015 | 3:40 AM

Baylor junior Mackenzie Dobbs, a theatre performance major, in a witch's costume decorated with beans and mushrooms produced from a 3D printer. Credit: Drapers: Sylvia Fuhrken and Ryan Schapp, Photo by Jared Tseng
Three-dimensional printers, which already have churned out jewelry, prosthetic limbs and one fully functioning car, are taking the stage -- literally -- in another arena: live theater.

They allow greater speed, flexibility and creativity -- and can appease directors who change their minds mid-rehearsal.

Synthetic beans and mushrooms -- accessories for the cursed, hump-backed witch in a Baylor University production of the musical "Into the Woods" -- recently emerged from a little machine tucked away in a corner of the costume shop at Baylor. And that's only the beginning for the new printer, says former Disneyland costume designer/wardrobe coordinator Joe Kucharski, assistant professor of theatre arts at Baylor.

Using his computer mouse and some free software, Kucharski tugged, flattened and pinched a digital "ball of clay" into the desired shapes: rotting vegetables, including two dozen beans and a dozen mushrooms. That done, the 3D printer heated and spun plastic cord into delicate threads to create the costume elements for the witchy wardrobe.

Depending on the size and complexity of a design, 3D printing may take 20 minutes to a couple of hours.

"You can set a few buttons and walk away during printing," Kucharski said. "You can customize and print multiples, and you can use colors that are the whole range of the rainbow.

"Designers are always thinking, 'How can we design quickly but keep it adjustable so we're ready if the director says, 'Well, we're kinda there. . .'? We can go back and tweak quickly."

The printers have been used in film and fashion, and "it's a great application for scenic design in theater, too," he said. "You can use miniatures created on a small-scale model and save time instead of carving little details."

The 3D printer is rapidly becoming part of the "designer tool bag." While students still need to learn traditional drawing and creating, incorporating 3D technology into the curriculum for costume and prop design can give them an edge in the job market.

"This is the way theatre is going," said Stan Denman, Ph.D., chair and professor of theatre arts at Baylor. "This even lets us create items that are no longer being produced -- like brooches or hatpins -- for period plays. Otherwise, because those things are antiques, the cost is prohibitive.

"This also can be helpful if you have an item that has to be broken in a scene," he said. "You can have multiple items to replace it for repeat performances."

Seeking reality in the future of aeronautical simulation

Written By Unknown on Thursday, January 8, 2015 | 9:22 AM

This CFD visualization required a NASA supercomputer to handle the intensive calculations. It shows a mesh adaptation used to simulate a transport aircraft in a high-lift configuration. Credit: NASA / Elizabeth Lee-Rausch, Michael Park
The right tool for the job. It's a platitude that is as true for garage tinkerers as it is for the NASA aeronautical innovators who are helping to design future airliners that will cut fuel consumption, reduce polluting emissions and fly more quietly.

Yet at least in one area -- namely computational fluid dynamics, or CFD -- the design tools that helped give us the modern airliners flying today are not expected to be up to the challenge in the future without some serious upgrades.

This was the finding of a report recently released by NASA called "CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences." It came out of a one-year study funded by NASA that included Boeing, Pratt & Whitney, Stanford University, The Massachusetts Institute of Technology, The University of Wyoming and The National Center for Supercomputing Applications.

The dilemma is that today's CFD, which simulates airflow around an airplane and through its jet engines, is largely designed to deal with aircraft with traditional tube and wing configurations that everyone is used to. And even then CFD's full effectiveness through all phases of flight is limited.

But future aircraft designs routinely flying during the 2030s may look very different from today's airliners in order to deliver on the promises of reduced fuel burn, noise and emissions.

Wings may be longer and skinnier and held up, or braced, by trusses. Aircraft hulls may be broader and flatter or have more pointed noses. Jet engines may be mounted on the top of the aircraft. Or the joint between a wing and the body may be blended into a seamless contour.

Understanding the physics behind how all of these new variables will affect airflow during all phases of flight, and then finding a way to model that in a computer simulation and validate the CFD is accurate, are the challenges facing NASA's computer experts right now.

"If we can get more physics into the models we're using with our CFD, we'll have a more general tool that can attack not only off-design conditions of conventional tube and wing aircraft, but it also will do better with the different looking configurations of the future," said Mike Rogers, an aerospace engineer at NASA's Ames Research Center in California.

Data from wind-tunnel testing of these new aircraft designs as they come along will help refine the CFD algorithms. The overarching goal is to improve the entire suite of testing capabilities -- simulation, ground and flight test -- to provide a more effective, comprehensive toolbox for designers to use to advance the state of the art more quickly.

"It's an iterative process," Rogers said. "We need to continually assess how well our tools are working so we know whether they are adequate or not."

In the meantime, even as NASA's CFD experts work down a path toward their long-range future goals of 2030 -- advancements made possible only because of vast leaps in computer processing speed and power -- their first step is to meet a set of more immediate technical challenges as soon as 2017.

The first stepping-stone goal is to reduce by 40 percent the error in computing several flow phenomena for which current models fail to make accurate predictions; these flow features are likely to be encountered on some of the new aircraft configurations now being studied.

The report not only highlighted the need to upgrade the CFD algorithms, but also discussed how those new algorithms must be written to take advantage of the ever-increasing speed and complexity of future supercomputers.

Source: NASA

Live adaptation of organ models in the OR

The non-deformed liver model (red) adapts to the deformed surface profile (blue). Credit: Graphics: Dr. Stefanie Speidel, KIT, in Medical Physics, 41
During minimally invasive operations, a surgeon has to trust the information displayed on the screen: A virtual 3D model of the respective organ shows where a tumor is located and where sensitive vessels can be found. Soft tissue, such as the tissue of the liver, however, deforms during breathing or when the scalpel is applied. Endoscopic cameras record in real time how the surface deforms, but do not show the deformation of deeper structures such as tumors. Young scientists of the Karlsruhe Institute of Technology (KIT) have now developed a real-time capable computation method to adapt the virtual organ to the deformed surface profile.

The principle appears to be simple: Based on computed tomography image data, the scientists construct a virtual 3D model of the respective organ, including the tumor, prior to the operation. During the operation, cameras scan the surface of the organ and generate a stiff profile mask. To this virtual mold, the 3D model then is to fit snugly, like jelly to a given form. The Young Investigator Group of Dr. Stefanie Speidel analyzed this geometrical problem of shape adaptation from the physical perspective. "We model the surface profile as electrically negative and the volume model of the organ as electrically positive charged," Speidel explains. "Now, both attract each other and the elastic volume model slides into the immovable profile mask." The adapted 3D model then reveals to the surgeon how the tumor has moved with the deformation of the organ.
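As a rough intuition for the electrostatic-elastic idea (not KIT's actual algorithm), the sketch below treats the organ model as a chain of spring-connected nodes pulled toward fixed surface points of opposite charge, and slides the elastic model into the profile by gradient descent on the combined forces. The geometry, constants, and the simplified one-to-one attraction are all invented for illustration.

```python
import numpy as np

# Fixed, deformed "surface profile" (the negatively charged mask) and the
# undeformed, elastic organ model (the positively charged nodes).
target = np.linspace(0.0, 1.0, 8) ** 2   # deformed surface heights
model = np.linspace(0.0, 1.0, 8).copy()  # undeformed model node heights

k_spring, k_charge, step = 1.0, 4.0, 0.05

def forces(x):
    # Elastic force: springs between neighboring nodes, rest length equal
    # to the undeformed spacing.
    rest = 1.0 / 7
    f = np.zeros_like(x)
    d = np.diff(x)
    f[:-1] += k_spring * (d - rest)
    f[1:] -= k_spring * (d - rest)
    # "Electrostatic" attraction: each node is pulled toward its matching
    # surface point (a softened stand-in for the full charge interaction).
    f += k_charge * (target - x)
    return f

for _ in range(500):
    model += step * forces(model)

print("max residual after relaxation:", np.abs(model - target).max())
```

The equilibrium balances elastic resistance against attraction, so the model hugs the mask closely without matching it exactly; in the real method this trade-off is what keeps the adapted organ physically plausible.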

Simulations and experiments using a close-to-reality phantom liver have demonstrated that the electrostatic-elastic method works even when only parts of the deformed surface profile are available. This is the usual situation at the hospital. The human liver is surrounded by other organs and, hence, only partly visible to endoscopic cameras. "Only those structures that are clearly identified as parts of the liver by our system are assigned an electric charge," says Dr. Stefan Suwelack who, as part of Speidel's group, wrote his Ph.D. thesis on this subject. Problems only arise if far less than half of the deformed surface is visible. To stabilize computation in such cases, the KIT researchers can use clear reference points, such as crossing vessels. Unlike other methods, however, theirs does not rely on such references from the outset.

In addition, the model of the KIT researchers is more precise than conventional methods, because it also considers biomechanical factors of the liver, such as the elasticity of the tissue. So for instance, the phantom liver used by the scientists consists of two different silicones: A harder material for the capsule, i.e. the outer shell of the liver, and a softer material for the inner liver tissue.

As a result of their physical approach, the young scientists also succeeded in accelerating the computation process. Because shape adaptation is described by electrostatic and elastic energies, they could express it in a single mathematical formula. Using this formula, even conventional computers with a single processing unit run the method quickly enough to be competitive. Unlike conventional computation methods, however, the new method is also suited to parallel computers. Using such a computer, the Young Investigator Group now plans to model organ deformations stably in real time.

New tool for exploring cells in 3D created

The new software can generate editable models of mid-size biological structures such as this one of HIV. Credit: Image created by Graham Johnson and Ludovic Autin of The Scripps Research Institute

Researchers can now explore viruses, bacteria and components of the human body in more detail than ever before with software developed at The Scripps Research Institute (TSRI).

In a study published online ahead of print December 1 by the journal Nature Methods, the researchers demonstrated how the software, called cellPACK, can be used to model viruses such as HIV.

"We hope to ultimately increase scientists' ability to target any disease," said Art Olson, professor and Anderson Research Chair at TSRI, who is senior author of the new study.

Putting cellPACK to the Test

The cellPACK software solves a major problem in structural biology. Although scientists have developed techniques to study relatively large structures, such as cells, and very small structures, such as proteins, it has been harder to visualize structures in the medium "mesoscale" range.

With cellPACK, researchers can quickly and efficiently process the data they've collected on smaller structures to assemble models in this mid-size range. Previously, researchers had to create these models by hand, which took weeks or months, compared with just hours using cellPACK.

As a demonstration of the software's power, the authors of the new study created a model of HIV showing how outer "spike" proteins are distributed on the surface of the immature virus.

The new model put to the test a conclusion made by HIV researchers from super-resolution microscopic studies -- that the distribution of the spike proteins on the surface of the immature virus is random. But by using cellPACK to generate thousands of models, testing alternative hypotheses, the researchers found that the distribution was not random. "We demonstrated that their interpretation of the distribution did not match that hypothesis," said Olson.
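The hypothesis-testing logic here (many randomly packed models versus one observed arrangement) can be illustrated with a toy Monte Carlo test. This is not cellPACK itself: it places hypothetical "spike" points in a plane, uses mean nearest-neighbor distance as a clustering statistic, and compares the observed value against thousands of uniformly random models. The spike count and clustered "observation" are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_nn_dist(points):
    # Mean nearest-neighbor distance: small when points cluster together.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()

n = 14  # hypothetical spikes per virion
# "Observed" spikes, deliberately clustered to stand in for real data.
observed = rng.normal(0.5, 0.08, size=(n, 2))

# Null ensemble: thousands of models with uniformly random spike placement.
null = np.array([mean_nn_dist(rng.uniform(0, 1, size=(n, 2)))
                 for _ in range(2000)])

obs = mean_nn_dist(observed)
p = (null <= obs).mean()  # fraction of random models at least this clustered
print(f"observed mean NN distance {obs:.3f}, p = {p:.4f}")
```

A small p-value rejects the "random distribution" hypothesis, which is the same kind of conclusion the authors drew by generating thousands of cellPACK models.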

A Team Effort

The cellPACK software began as the thesis project of a TSRI graduate student, Graham Johnson, now a QB3 faculty fellow at the University of California, San Francisco (UCSF), who continues to contribute to the project. Johnson had more than 15 years' experience as a medical illustrator, and he wanted to create an easy way to visualize mesoscale structures. cellPACK is an expansion of Johnson's autoPACK software, which maps out the density of materials -- from concrete in a building to red blood cells in an artery.

The researchers see cellPACK as a community effort, and they have made the autoPACK and cellPACK software free and open source. Thousands of people have already downloaded the software from http://www.autopack.org.

"With the creation of cellPACK, Dr. Olson and his colleagues have addressed the challenge of integrating biological data from different sources and across multiple scales into virtual models that can simulate biologically relevant molecular interactions within a cell," said Veersamy Ravichandran, PhD, of the National Institutes of Health's National Institute of General Medical Sciences, which partially funded the research. "This user-friendly tool provides a new platform for data analysis and simulation in a collaborative manner between laboratories."

As new information comes in from the scientific community, researchers will tweak the software so it can model new shapes. "Making it open source makes it more powerful," said Olson. "The software right now is usable and very useful, but it's really a tool for the future."

Software models more detailed evolutionary networks from genetic data

Written By Unknown on Wednesday, January 7, 2015 | 11:07 PM

Phylogenetic networks depict the movement of genetic sequences from one species to another as a means of showing where horizontal gene transfer may have taken place. Software by scientists at Rice University aims to reveal far more about species’ evolutionary histories than traditional tree models are able to. Credit: Luay Nakhleh/Rice University
The tree has been an effective model of evolution for 150 years, but a Rice University computer scientist believes it's far too simple to illustrate the breadth of current knowledge.

Rice researcher Luay Nakhleh and his group have developed PhyloNet, an open-source software package that accounts for horizontal as well as vertical inheritance of genetic material among genomes. His "maximum likelihood" method, detailed this month in the Proceedings of the National Academy of Sciences, allows PhyloNet to infer network models that better describe the evolution of certain groups of species than do tree models.

"Inferring" in this case means analyzing genes to determine their evolutionary history with the highest probability -- the maximum likelihood -- of connections between species. Nakhleh and Rice colleague Christopher Jermaine recently won a $1.1 million National Science Foundation grant to analyze evolutionary patterns using Bayesian inference, a statistics-based technique to estimate probabilities based on a data set.

To build networks that account for all of the genetic connections between species, the software infers the probability of variations that phylogenetic trees can't illustrate, such as horizontal gene transfers. These transfers circumvent simple parent-to-offspring evolution and allow genetic variations to move from one species to another by means other than reproduction.

Biologists want to know when and how these transfers happened, but tree structures conceal such information. "When horizontal transfer occurs, as with the hybridization of two species, the tree model becomes inadequate to describe the evolutionary history, and networks that incorporate horizontal gene transfer become the more appropriate model," Nakhleh said.

Nakhleh's Java-based software accounts for incomplete lineage sorting, in which clues to gene evolution that don't match the established lineage of species appear in the genetic record.

"We are the first group to develop a general model that will allow biologists to estimate hybridization while accounting for all these complexities in evolution," Nakhleh said.

Most existing programs for phylogenetics (the study of evolutionary relationships) ignore such complexities. "They end up overestimating the amount of hybridization," Nakhleh said. "They start seeing lots of complexities in the data and say, 'Oh, it's complex here; it must be hybridization,' and end up inferring too much. Our method acknowledges that part of the complexity has nothing to do with hybridization; it has to do with other random processes that happened during evolution."

The Rice researchers used two data sets to test the new program. One, a computer-generated set of data that mimics a realistic model of evolution, allowed them to evaluate the accuracy of the program. The second involved multiple genomes of mice found across Europe and Asia. "There have been stories about mice hybridizing," Nakhleh said. "Now that we have the first method to allow for systematic analysis, we ran it on a very large amount of data from five mouse samples and we detected hybridization" -- most notably in the presence of a genetic signal from a mouse in Kazakhstan that found its way to mice in France and Germany, he said.

Nakhleh hopes evolutionary biologists will use PhyloNet to take a fresh look at the massive amount of genomic data collected over the past few decades. "The exciting thing for me about this is that biologists can now systematically go through lots of data they have generated and check to see if there has been hybridization."

Computational model: Ebola could infect more than 1.4 million people by end of January 2015

The Network Dynamics and Simulation Science Laboratory at the Virginia Bioinformatics Institute modeled the rate of infections and how interventions would affect the rate. Credit: CDC / Image courtesy of Virginia Tech
The Ebola epidemic could claim hundreds of thousands of lives and infect more than 1.4 million people by the end of January, according to a statistical forecast released this week by the U.S. Centers for Disease Control and Prevention.

The CDC forecast supports the drastically higher projections released earlier by a group of scientists, including epidemiologists with the Virginia Bioinformatics Institute, who modeled the Ebola spread as part of a National Institutes of Health-sponsored project called Midas, short for Models of Infectious Disease Agent Study.

The effort is also supported by the federal Defense Threat Reduction Agency.

Before the scientists released results, the outbreak in West Africa was expected to be under control in nine months with only about 20,000 total cases. But modeling showed 20,000 people could be infected in just a single month.

The predictions could change dramatically if public health efforts become effective, but based on the virus's current uncontrolled spread, numbers of people infected could skyrocket.

"If the disease keeps spreading as it has been we estimate there could be hundreds of thousands of cases by the end of the year in Liberia alone," said Bryan Lewis, a computational epidemiologist with the Network Dynamics and Simulation Science Laboratory at the Virginia Bioinformatics Institute.

Lewis and his fellow researchers use a combination of models to predict outcomes of the epidemic.

The agent-based models are adaptive, evolving as more information is fed into them to provide an accurate forecast.
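Agent-based models like those described here track individuals and are far beyond a snippet, but the kind of scenario comparison they support can be hinted at with a minimal SEIR compartmental sketch. All parameters below are illustrative, not the CDC's or the institute's.

```python
# Minimal daily-step SEIR sketch with invented (not CDC) parameters.
def seir(beta, sigma, gamma, days, n=1_000_000, e0=10):
    s, e, i, r = n - e0, e0, 0.0, 0.0
    history = []
    for _ in range(days):
        new_exposed = beta * s * i / n   # contacts becoming exposed
        new_infectious = sigma * e       # exposed becoming infectious
        new_recovered = gamma * i        # infectious recovering/removed
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_recovered
        r += new_recovered
        history.append(i + r)            # cumulative ever-infectious
    return history

# Uncontrolled spread vs. an intervention that lowers the contact rate beta.
uncontrolled = seir(beta=0.32, sigma=1 / 10, gamma=1 / 8, days=120)
controlled = seir(beta=0.12, sigma=1 / 10, gamma=1 / 8, days=120)
print(f"day 120 cumulative: {uncontrolled[-1]:,.0f} vs {controlled[-1]:,.0f}")
```

Cutting beta pushes the reproduction number below one, and the projected case count collapses; this sensitivity of forecasts to intervention effectiveness is exactly why the article stresses that the predictions "could change dramatically."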

Pharmaceutical intervention, which is still on the horizon, is proving less effective in the models than supportive care and personal protection equipment for health care workers.

"The work with Ebola is not an isolated event," said Christopher Barrett, the executive director of the institute. "This research is part of a decades-long effort largely funded by the Defense Threat Reduction Agency to build a global synthetic population that will allow us to ask questions about our world and ourselves that we have never been able to ask before, and to use those answers to prevent or quickly intervene during a crisis."

Barrett and other institute leaders updated U.S. Sen. Tim Kaine and Virginia Tech President Timothy Sands about the Network Dynamics and Simulation Science Lab's role in analyzing the Ebola outbreak at the Virginia Tech Research Center in Arlington on Tuesday morning. That afternoon in Blacksburg they briefed staff members from U.S. Sen. Mark Warner's office.

A university-level Research Institute of Virginia Tech, the Virginia Bioinformatics Institute was established in 2000 with an emphasis on the informatics of complex interacting systems, scaling from the microbiome to the entire globe. It helps solve challenges posed to human health, security, and sustainability. Headquartered at the Blacksburg campus, the institute occupies 154,600 square feet in research facilities, including state-of-the-art core laboratory and high-performance computing facilities, as well as research offices in the Virginia Tech Research Center in Arlington, Virginia.

Source: Virginia Tech

Statistical model predicts performance of hybrid rice

Written By Unknown on Tuesday, January 6, 2015 | 11:12 PM

Long-grain rice
Genomic prediction, a new field of quantitative genetics, is a statistical approach to predicting the value of an economically important trait in a plant, such as yield or disease resistance. The method works if the trait is heritable, as many traits tend to be, and can be performed early in the life cycle of the plant, helping reduce costs.

Now a research team led by plant geneticists at the University of California, Riverside and Huazhong Agricultural University, China, has used the method to predict the performance of hybrid rice (for example, the yield, growth-rate and disease resistance). The new technology could potentially revolutionize hybrid breeding in agriculture.

The study, published online in the Proceedings of the National Academy of Sciences, is a pilot research project on rice. The technology can be easily extended, however, to other crops such as maize.

"Rice and maize are two main crops that depend on hybrid breeding," said Shizhong Xu, a professor of genetics in the UC Riverside Department of Botany and Plant Sciences, who co-led the research project. "If we can identify many high-performance hybrids in these crops and use these hybrids, we can substantially increase grain production to achieve global food security."

Genomic prediction uses genome-wide markers to predict the trait values of future individuals. These markers are genes or DNA sequences with known locations on a chromosome. Genomic prediction differs from traditional prediction in that it skips the marker-detection step. The method simply uses all markers of the entire genome to predict a trait.

"Classical marker-assisted selection only uses markers that have large effects on the trait," Xu explained. "It ignores all markers with small effects. But many economically important traits are controlled by a large number of genes with small effects. Because the genomic prediction model captures all these small-effect genes, predictability is vastly improved."

Without genomic prediction, breeders must grow all possible crosses in the field to select the best cross (hybrid). For example, for 1,000 inbred parents, the total number of crosses would be 499,500.

"It is impossible to grow these many crosses in the field," Xu said. "However, with the genomic prediction technology, we can grow only, say, 500 crosses, then predict all the 499500 potential crosses, and select the best crosses based on the predicted values of these hybrids."

Xu noted that genomic prediction is particularly useful for predicting hybrids because hybrid DNA sequences are determined by their inbred parents.

"More cost-saving can be achieved because we do not need to measure the DNA sequences of the hybrids," he said. "Knowing the genotypes of the parents makes it possible to immediately know the genotype of the hybrid. Indeed, there is no need to measure the genotype of the hybrid. It is fully predicted by the model."

When the researchers incorporated "dominance" and "epistasis" into their prediction model, they found that predictability improved. In genetics, dominance describes the joint action of two different alleles (copies) of a gene. For example, if one copy of a gene has a value of 1 and the other copy has a value of 2, the joint effect of the two alleles may be 4 rather than the additive sum of 3; in this case, dominance has occurred. Epistasis refers to any type of gene-gene interaction.
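The numeric example above can be written out directly; the allele values are the article's illustrative ones, not real data:

```python
# Two alleles that would contribute 1 and 2 if they acted additively
allele_values = (1, 2)

additive_prediction = sum(allele_values)  # 3 under pure additivity
observed_joint_effect = 4                 # the observed value in the example

# A nonzero deviation signals dominance
dominance_deviation = observed_joint_effect - additive_prediction
print(dominance_deviation)  # → 1
```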

"By incorporating dominance and epistasis, we took into account all available information for prediction," Xu said. "It led to a more accurate prediction of a trait value."

Genomic prediction can also be used to predict heritable human diseases. Many cancers, for example, are heritable, and genomic prediction can be performed to estimate a person's disease risk.

Xu was joined in the research by Qifa Zhang and his student Dan Zhu at Huazhong Agricultural University, China.

Next the research team, led by Xu and Zhang, will design a field experiment to perform hybrid prediction in rice.

Dance choreography improves girls' computational skills

Written By Unknown on Monday, January 5, 2015 | 11:04 PM

Report lead author Shaundra Daily performs alongside her virtual character. Daily designs innovative new technologies that bring together sensors and machine learning with theories of human learning. Credit: Clemson University
Clemson researchers have found that blending movement and computer programming supports girls in building computational thinking skills, according to an ongoing study funded by the National Science Foundation and an emerging-technology report published in the journal Technology, Knowledge and Learning.

Even as demand for computationally savvy workers grows, women remain underrepresented in science, technology, engineering and mathematics (STEM) fields, the researchers say.

"We want more diverse faces around the table, helping to come up with technological solutions to societal issues," said Shaundra Daily, lead author on the report and assistant professor of computing at Clemson. "So we're working with girls to create more pathways to support their participation."

Virtual Environment Interactions (VEnvI) is software and a curriculum for blending movement and programming, offering a novel, embodied strategy for engaging fifth- and sixth-grade girls in computational thinking.

"We want to understand how body syntonicity might enable young learners to bootstrap their intuitive knowledge in order to program a three-dimensional character to perform movements," said Alison Leonard, report co-author and assistant professor of education at Clemson.

In the process of developing this emerging technology, the researchers conduct user-centered design research for creating choreography and the social context for a virtual character through which girls can be introduced to alternative applications in computing.

"We adopt the view that computational thinking is a set of concepts, practices and perspectives that draw upon the world of computing and applicable in many STEM fields," Daily said.

Students met with instructors and learned a basic curriculum covering the elements of dance, choreography and Alice, an existing educational software package that teaches students computer programming in a three-dimensional environment.

The researchers utilize movement choreography as both an engaging and a parallel context for introducing computational thinking. Compositional strategies in the choreographic process of ordering and reordering movement sequences also mirror computational practices of reusing and remixing.

"Executing one bit of code or movement one after the other exists in both programming and choreography. Likewise, loops or repeating a set of steps, also occur in both contexts," Leonard said.

The students moved and created pieces for their virtual characters to perform, bringing about connections between computational thinking and what their bodies are doing.

The findings point to the active presentation of concepts and the future scalability of the VEnvI virtual environment, which will add to the rich landscape of emerging technologies geared toward more inclusive strategies for engaging girls in computational thinking.

The researchers are designing the first control algorithm that links concepts from computational thinking to animation, creating and evaluating new animation algorithms while working to ensure the quality of the resulting choreography.

This emerging technology has the potential to widen the scope of current technologies that seek to cultivate computational thinking for diverse designers, users and audiences, according to the researchers.

Source: Clemson University

Astronomers bring the third dimension to a doomed star's outburst

Written By Unknown on Wednesday, December 31, 2014 | 12:57 PM

A new shape model of the Homunculus Nebula reveals protrusions, trenches, holes and irregularities in its molecular hydrogen emission. The protrusions appear near a dust skirt seen at the nebula's center in visible light (inset) but not found in this study, so they constitute different structures. Credit: NASA Goddard (inset: NASA, ESA, Hubble SM4 ERO Team)
In the middle of the 19th century, the massive binary system Eta Carinae underwent an eruption that ejected at least 10 times the sun's mass and made it the second-brightest star in the sky. Now, a team of astronomers has used extensive new observations to create the first high-resolution 3-D model of the expanding cloud produced by this outburst.

"Our model indicates that this vast shell of gas and dust has a more complex origin than is generally assumed," said Thomas Madura, a NASA Postdoctoral Program fellow at NASA's Goddard Space Flight Center in Greenbelt, Maryland, and a member of the study team. "For the first time, we see evidence suggesting that intense interactions between the stars in the central binary played a significant role in sculpting the nebula we see today."

Eta Carinae lies about 7,500 light-years away in the southern constellation of Carina and is one of the most massive binary systems astronomers can study in detail. The smaller star is about 30 times the mass of the sun and may be as much as a million times more luminous than the sun. The primary star contains about 90 solar masses and emits 5 million times the sun's energy output. Both stars are fated to end their lives in spectacular supernova explosions.

Between 1838 and 1845, Eta Carinae underwent a period of unusual variability during which it briefly outshone Canopus, normally the second-brightest star. As a part of this event, which astronomers call the Great Eruption, a gaseous shell containing at least 10 and perhaps as much as 40 times the sun's mass was shot into space. This material forms a twin-lobed dust-filled cloud known as the Homunculus Nebula, which is now about a light-year long and continues to expand at more than 1.3 million mph (2.1 million km/h).

Using the European Southern Observatory's Very Large Telescope and its X-Shooter spectrograph over two nights in March 2012, the team imaged near-infrared, visible and ultraviolet wavelengths along 92 separate swaths across the nebula, making the most complete spectral map to date. The researchers have used the spatial and velocity information provided by this data to create the first high-resolution, fully 3-D model of the Homunculus Nebula. The new model contains none of the assumptions about the cloud's symmetry found in previous studies.

The shape model, now published in the journal Monthly Notices of the Royal Astronomical Society, was developed using only a single emission line of near-infrared light emitted by molecular hydrogen gas. The characteristic 2.12-micron light shifts slightly in wavelength depending on the speed and direction of the expanding gas, allowing the team to probe even dust-obscured portions of the Homunculus that face away from Earth.
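The wavelength-to-velocity conversion underlying this technique is the ordinary Doppler relation. A minimal sketch, where the example shift is chosen to land near the quoted expansion speed of roughly 1.3 million mph (about 580 km/s) and the function name is ours:

```python
C_KM_S = 299_792.458   # speed of light, km/s
REST_UM = 2.1218       # rest wavelength of the H2 1-0 S(1) line, microns

def radial_velocity(observed_um):
    """Line-of-sight velocity in km/s; positive means receding."""
    return C_KM_S * (observed_um - REST_UM) / REST_UM

# A redshift of ~0.004 micron corresponds to roughly 580 km/s,
# close to the expansion speed quoted for the Homunculus lobes
v = radial_velocity(2.1259)
print(round(v))
```

Mapping the measured shift at each position in the 92 swaths to a velocity like this is what turns a 2-D spectral map into a 3-D structure.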

"Our next step was to process all of this using 3-D modeling software I developed in collaboration with Nico Koning from the University of Calgary in Canada. The program is simply called 'Shape,' and it analyzes and models the three-dimensional motions and structure of nebulae in a way that can be compared directly with observations," said lead researcher Wolfgang Steffen, an astrophysicist at the Ensenada campus of the National Autonomous University of Mexico.

The new shape model confirms several features identified by previous studies, including pronounced holes located at the ends of each lobe and the absence of any extended molecular hydrogen emission from a dust skirt apparent in visible light near the center of the nebula. New features include curious arm-like protrusions emanating from each lobe near the dust skirt; vast, deep trenches curving along each lobe; and irregular divots on the side facing away from Earth.

"One of the questions we set out to answer with this study is whether the Homunculus contains any imprint of the star's binary nature, since previous efforts to explain its shape have assumed that both lobes were more or less identical and symmetric around their long axis," explained team member Jose Groh, an astronomer at Geneva University in Switzerland. "The new features strongly suggest that interactions between Eta Carinae's stars helped mold the Homunculus."

Every 5.5 years, when their orbits carry them to their closest approach, called periastron, the immense and brilliant stars of Eta Carinae are only as far apart as the average distance between Mars and the sun. Both stars possess powerful gaseous outflows called stellar winds, which constantly interact but do so most dramatically during periastron, when the faster wind from the smaller star carves a tunnel through the denser wind of its companion. The opening angle of this cavity closely matches the length of the trenches (130 degrees) and the angle between the arm-like protrusions (110 degrees), indicating that the Homunculus likely continues to carry an impression from a periastron interaction around the time of the Great Eruption.

Once the researchers had developed their Homunculus model, they took things one step further. They converted it to a format that can be used by 3-D printers and made the file available along with the published paper.

"Now anyone with access to a 3-D printer can produce their own version of this incredible object," said Goddard astrophysicist Theodore Gull, who is also a co-author of the paper. 

"While 3-D-printed models will make a terrific visualization tool for anyone interested in astronomy, I see them as particularly valuable for the blind, who now will be able to compare embossed astronomical images with a scientifically accurate representation of the real thing."

Source: NASA

Earthquake simulation tops one petaflop mark

Written By Unknown on Wednesday, October 29, 2014 | 3:18 AM

Visualization of vibrations inside the Merapi volcano. Credit: Alex Breuer/Christian Pelties
A team of computer scientists, mathematicians and geophysicists at Technische Universitaet Muenchen (TUM) and Ludwig-Maximilians-Universitaet Muenchen (LMU) has, with the support of the Leibniz Supercomputing Center of the Bavarian Academy of Sciences and Humanities (LRZ), optimized the SeisSol earthquake simulation software on the SuperMUC high-performance computer at the LRZ to push its performance beyond the "magical" one petaflop/s mark, one quadrillion floating point operations per second.

Geophysicists use the SeisSol earthquake simulation software to investigate rupture processes and seismic waves beneath Earth's surface. Their goal is to simulate earthquakes as accurately as possible, both to be better prepared for future events and to better understand the fundamental underlying mechanisms. However, the calculations involved in this kind of simulation are so complex that they push even supercomputers to their limits.

In a collaborative effort, the workgroups led by Dr. Christian Pelties at the Department of Geo and Environmental Sciences at LMU and Professor Michael Bader at the Department of Informatics at TUM have optimized the SeisSol program for the parallel architecture of the Garching supercomputer "SuperMUC," thereby speeding up calculations by a factor of five.

Using a virtual experiment, they achieved a new record on SuperMUC: to simulate the vibrations inside the geometrically complex Merapi volcano on the island of Java, the supercomputer executed 1.09 quadrillion floating point operations per second. SeisSol maintained this unusually high performance level throughout the entire three-hour simulation run, using all of SuperMUC's 147,456 processor cores.

Complete parallelization
This was possible only after extensive optimization and complete parallelization of SeisSol's 70,000 lines of code, allowing a peak performance of up to 1.42 petaflops. This corresponds to 44.5 percent of SuperMUC's theoretically available capacity, making SeisSol one of the most efficient simulation programs of its kind worldwide.
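The quoted figures are easy to cross-check with back-of-the-envelope arithmetic:

```python
# Cross-checking the press-release numbers: 1.42 Pflop/s at 44.5 percent
# efficiency implies the machine's theoretical peak, and the sustained
# 1.09 Pflop/s spread over 147,456 cores gives the per-core rate.
seissol_peak_pflops = 1.42
efficiency = 0.445
sustained_pflops = 1.09
cores = 147_456

machine_peak = seissol_peak_pflops / efficiency    # ≈ 3.2 Pflop/s
per_core_gflops = sustained_pflops * 1e6 / cores   # ≈ 7.4 Gflop/s

print(f"theoretical peak ≈ {machine_peak:.1f} Pflop/s")
print(f"sustained per core ≈ {per_core_gflops:.1f} Gflop/s")
```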

"Thanks to the extreme performance now achievable, we can run five times as many models or models that are five times as large to achieve significantly more accurate results. Our simulations are thus inching ever closer to reality," says the geophysicist Dr. Christian Pelties. "This will allow us to better understand many fundamental mechanisms of earthquakes and hopefully be better prepared for future events."

The next steps are earthquake simulations that include rupture processes on the meter scale as well as the resultant destructive seismic waves that propagate across hundreds of kilometers. The results will improve the understanding of earthquakes and allow a better assessment of potential future events.
"Speeding up the simulation software by a factor of five is not only an important step for geophysical research," says Professor Michael Bader of the Department of Informatics at TUM. "We are, at the same time, preparing the applied methodologies and software packages for the next generation of supercomputers that will routinely host the respective simulations for diverse geoscience applications."
Besides Michael Bader and Christian Pelties also Alexander Breuer, Dr. Alexander Heinecke and Sebastian Rettenberger (TUM) as well as Dr. Alice Agnes Gabriel and Stefan Wenk (LMU) worked on the project. In June the results will be presented at the International Supercomputing Conference in Leipzig (ISC'14, Leipzig, 22-June 26, 2014; title: Sustained Petascale Performance of Seismic Simulation with SeisSol on SuperMUC)

Source: Technische Universitaet Muenchen
 