Moore's Law states that the number of transistors that fit on a chip doubles roughly every two years -- often quoted as every 18 months -- and for decades that doubling also meant roughly doubling speed. Gordon Moore made the observation in 1965, and as technology has advanced, CPUs have held ever more transistors while the transistors themselves have kept shrinking. Intel recently announced plans to ship a 14 nanometer CPU this September. 14 nanometers is only about 60 times the diameter of a silicon atom, which means billions of transistors now fit on a single chip -- over a million times more than the first integrated circuits of the late 1950s could hold.
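As a rough sanity check on those numbers (my back-of-the-envelope arithmetic, not Intel's), a two-year doubling period means the transistor count after t years grows as

$$N(t) \approx N_0 \cdot 2^{t/2}.$$

Starting from the roughly 2,300 transistors of Intel's first microprocessor in 1971, forty-two years of doubling gives a factor of about $2^{21} \approx 2$ million -- which lands squarely in today's billions-of-transistors territory.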
Throughout the late 20th century, Moore's Law was the software developer's best friend: thanks to ever-increasing CPU clock speeds, software got roughly twice as fast every couple of years without any extra effort on the developer's part. In the early 2000s, however, heat and power limits in a CPU's circuitry demanded a new architecture if Moore's Law were to hold true. Out of those limits the multicore processor was born. Put simply, before roughly 2002 you had one CPU in your computer, and afterward you have multiple CPUs -- or cores -- in your computer. Each individual core has not gotten much faster since then, but the number of cores keeps doubling, which means Moore's Law still holds.
In order for today's developers to benefit from the speed increases Moore's Law still offers, they need to take advantage of multicore processors. But that's not simple. For the past 50 years developers have written programs to run on a single CPU, and most of the tools, programming languages, college curricula and research have followed suit. A program designed with all of that knowledge and experience will only run on a single core of a multicore CPU, and those cores aren't getting faster. Developers now have to rethink their approach and coordinate -- or parallelize -- programs across cores, which is no easy feat.
An apt metaphor for describing the difficulties of multicore programming is the game "telephone." For those of you who aren't familiar with this childhood game, the basic premise is that several children stand in a line and the first child whispers a word or phrase to the next, who whispers it to the next child, and so on. When the last child hears the word or phrase, he says it out loud so everyone can hear, and it is never the same as it was at the beginning of the chain. I remember once starting one of these infamous chains with the phrase "I think Jenny's cute" only to have it end with "Jenny smells like monkey poop" -- clearly not the desired outcome. Multicore programming is like this, but with millions of kids whispering millions of words, a million times a second.
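To see the metaphor in code, here is a minimal sketch (my own illustration, written in Scala -- the language I'll get to below -- and assuming a modern Scala, 2.12 or later) of what goes wrong when many cores whisper into the same memory with no coordination:

```scala
// A minimal sketch of the "telephone" problem on a multicore machine:
// eight threads whispering updates into one shared variable with no
// coordination, so increments get lost in transit.
object TelephoneRace extends App {
  var counter = 0 // shared mutable state: the "message" being passed

  val threads = (1 to 8).map { _ =>
    new Thread(() => {
      for (_ <- 1 to 100000) counter += 1 // read-modify-write race
    })
  }
  threads.foreach(_.start())
  threads.foreach(_.join())

  // Expect 800000; unsynchronized increments interleave and get lost,
  // so the actual total comes up short -- and differs on every run.
  println(s"expected 800000, got $counter")
}
```

Run it a few times and you'll get a different wrong answer each time -- the programmatic equivalent of "monkey poop."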
Today's most popular programming languages (C, C++, C#, Objective-C, Java, Python, Ruby) are ill-equipped for the challenge of multicore programming. Developers can still use them for single-CPU programming, but then the program's speed stays stuck in 2002. And unfortunately, that is exactly what is happening with much of the code being written today.
Within select academic communities, however, programming languages have been developed that make multicore programming far more tractable. These are known as "functional programming languages," and while they don't completely solve the difficulties of multicore programming, they certainly soften them. To return to the earlier metaphor, playing "telephone" with a functional programming language is more like playing with adults instead of children: there is still room for "monkey poop" moments, but they are far less likely.
My favorite of these functional programming languages is Scala. Scala runs on the Java Virtual Machine, which is great because much of the knowledge a Java programmer has built up over the years carries straight over to Scala. It's also flexible enough that a Java developer can ease into the language gradually. Scala describes itself as a hybrid of object-oriented and functional programming; Java programmers have been doing object-oriented programming since Java's inception, so they only need to adapt to a few functional paradigms.
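As a taste of what those paradigms buy you on a multicore machine, here is a minimal sketch of my own (assuming Scala 2.13 with the scala-parallel-collections module; in older Scala versions .par is built in and the import is unnecessary):

```scala
// Sum of squares, spread across all available cores.
// Because the data is immutable and the mapped function is pure,
// there is no shared state for the cores to garble.
import scala.collection.parallel.CollectionConverters._

object ParallelSum extends App {
  val numbers = (1 to 1000000).toVector // immutable collection

  // .par hands the collection to a parallel implementation; map and
  // sum then split the work across cores automatically.
  val total = numbers.par.map(n => n.toLong * n).sum

  println(s"sum of squares: $total")
}
```

That one-character change -- adding .par -- is safe precisely because the functional style avoids shared mutable state.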
Scala has been dramatically gaining in popularity over the past decade. Companies like LinkedIn, Twitter, Intel, Foursquare, and the Huffington Post are using it in production. Large communities have sprouted up all over the world through sites like meetup.com (also using Scala). And it's becoming easier to learn since free courses on the language are being offered through coursera.org.

Now, it's easy for me to sing the praises of Scala as an avid user and the organizer of both the New York Scala Meetup and the Boulder Scala Meetup, so it's only fair to note that there are a few other languages out there making multicore programming easier. Popular ones include Clojure, F#, and Go. These languages all have their upsides and downsides, as any language does, but given the multicore turn Moore's Law has taken, it's in every developer's best interest to start learning one.


Just as programmers use Python or Java to write code for a computer, chemists could soon be able to use a structured set of instructions to “program” how DNA molecules interact in a test tube or a cell.

A team led by the University of Washington (UW) has developed a programming language for chemistry that it hopes will streamline efforts to design a network that can guide the behavior of chemical-reaction mixtures in the same way that embedded electronic controllers guide cars, robots and other devices. In medicine, such networks could serve as “smart” drug deliverers or disease detectors at the cellular level.

Chemists and educators teach and use chemical reaction networks, a century-old language of equations that describes how mixtures of chemicals behave. The UW engineers take this language a step further and use it to write programs that direct the movement of tailor-made molecules. 
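For readers who haven't seen that language, here is a standard textbook illustration (mine, not taken from the UW paper): a single reaction in which species A and B combine to form C with rate constant k,

$$A + B \xrightarrow{k} C,$$

which under mass-action kinetics corresponds to the differential equations

$$\frac{d[C]}{dt} = k[A][B], \qquad \frac{d[A]}{dt} = \frac{d[B]}{dt} = -k[A][B].$$

Networks of such reactions and equations are the "source code" that the UW approach compiles into actual DNA molecules.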

“We start from an abstract, mathematical description of a chemical system, and then use DNA to build the molecules that realize the desired dynamics,” said corresponding author Georg Seelig, a UW assistant professor of electrical engineering and of computer science and engineering. “The vision is that eventually, you can use this technology to build general-purpose tools.” 
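To give a flavor of what an "abstract, mathematical description of a chemical system" can look like in code, here is a hedged sketch (my own toy, not the UW group's tool): a reaction network represented as plain data and simulated by Euler-stepping its mass-action rate equations.

```scala
object CrnSketch extends App {
  // One hypothetical reaction: A + B -> C with rate constant k.
  case class Reaction(reactants: Seq[String], products: Seq[String], k: Double)
  val network = Seq(Reaction(Seq("A", "B"), Seq("C"), k = 1.0))

  // Initial concentrations and Euler step size.
  var conc = Map("A" -> 1.0, "B" -> 1.0, "C" -> 0.0)
  val dt = 0.01

  for (_ <- 0 until 1000) {
    val deltas = network.flatMap { r =>
      val rate = r.k * r.reactants.map(conc).product // mass-action rate
      r.reactants.map(_ -> -rate * dt) ++ r.products.map(_ -> rate * dt)
    }
    conc = deltas.foldLeft(conc) { case (c, (sp, d)) => c.updated(sp, c(sp) + d) }
  }

  println(conc) // A and B are consumed as C accumulates
}
```

A real molecular compiler does something far more sophisticated -- per the article, it emits DNA molecules whose interactions realize the desired dynamics -- but the starting point is the same kind of abstract description.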

Currently, when a biologist or chemist makes a certain type of molecular network, the engineering process is complex, cumbersome and hard to repurpose for building other systems. The UW engineers wanted to create a framework that gives scientists more flexibility. Seelig likens this new approach to programming languages that tell a computer what to do. 

“I think this is appealing because it allows you to solve more than one problem,” Seelig said. “If you want a computer to do something else, you just reprogram it. This project is very similar in that we can tell chemistry what to do.” 

[Figure: An example of a chemical program, where A, B and C are different chemical species. Image: Yan Liang, L2XY2.com]

Humans and other organisms already have complex networks of nano-sized molecules that help to regulate cells and keep the body in check. Scientists now are finding ways to design synthetic systems that behave like biological ones, with the hope that synthetic molecules could support the body's natural functions.

To that end, a system is needed for creating synthetic DNA molecules that vary according to their specific functions. The new approach isn't ready for the medical field yet, but future work could use this framework to make molecules that self-assemble within cells and serve as “smart” sensors. These could be embedded in a cell and programmed to detect abnormalities and respond as needed, perhaps by delivering drugs directly to affected cells.

Seelig and colleague Eric Klavins, a University of Washington associate professor of electrical engineering, recently received $2 million from the National Science Foundation as part of a national initiative to boost research in molecular programming. The new language will be used to support that larger initiative, Seelig said. 

Co-authors of the paper are Yuan-Jyue Chen, a UW doctoral student in electrical engineering; David Soloveichik of the University of California, San Francisco; Niranjan Srinivas at the California Institute of Technology; and Neil Dalchau, Andrew Phillips and Luca Cardelli of Microsoft Research. 

The research was funded by the National Science Foundation, the Burroughs Wellcome Fund and the National Centers for Systems Biology. 

Adapted from a University of Washington news release; see the original article on the UW website for more details.
