Find your own career path

IBM researcher Heike Riel wins SVIN Award 2012.

This article was translated from a Q&A conducted by SVIN managing director Brigitte Manz-Brunner. See also the external press release.

Zurich physicist Heike Riel
The Swiss Association of Women in Engineering (SVIN) honored Heike Riel, physicist at IBM Research – Zurich, with its 2012 "Technical or Scientific Innovation" award. Heike's work focuses on the nanoscale electronics of new materials and nanoelectronic devices for future energy-efficient computers.

Women still make up only a small percentage of professionals in technology and the natural sciences. Why did you choose this field?

I always had a flair for mathematics from my earliest school days. It's one of my passions that I was determined to pursue, even though it's generally assumed that girls have less aptitude for math than boys do.

I was fortunate to have the freedom to study what interested me the most, so I chose physics. Physics is fascinating because it's about applying mathematics to better understand our world and to find innovative technical solutions to the challenges we face.

Did you encounter resistance as you pursued your career?

I never really encountered much resistance, to tell the truth. Of course there are always those people who make dumb remarks about anything unusual. As a female physicist, one is a bit out of the ordinary and sometimes people don't know how to gauge you. There's only one solution: persuade with performance.

Were you supported in your choice of career? Who are your role models?

Oh yes, I enjoyed a lot of support. It's often the little things that really help, such as receiving important information or good advice in order to make the right decisions along the way — and the freedom to try something out or to find your own path.

I would say that my father was my strongest role model throughout my childhood. I originally wanted to follow in his footsteps and become an engineer before choosing physics.

I am grateful to have met so many very interesting and impressive people throughout my career from whom I was able to learn.

Please share one or two memorable experiences from your career so far.

There have been so many wonderful experiences! One that was decisive for my career was a student internship at the Hewlett-Packard Laboratories in Palo Alto, California, which was my first exposure to working in an industrial research lab. That's the reason why, after my internship, I applied to the IBM Research – Zurich Lab, where I've been ever since.

Perhaps an interesting side note is that HP accepted my internship application not because of my academic record — they have plenty of top-notch applicants from right around the corner at Stanford University — but because I was the only applicant who had also learned a trade. (Before studying physics I completed an apprenticeship as a cabinet maker.)

What advice would you give to young women interested in a career in technology or the natural sciences?

In my opinion, the most important thing is to find your own path. Don't let others discourage you with clichés such as "girls aren't good at math" and other nonsense. This requires a certain measure of self-confidence that you've got to develop along the way to this career. Mentors can be a big help in this respect.

Talk about being a mentor, especially about your activities within women's networks such as SVIN.

In my role as a group manager and mentor, I support both men and women. I try to stay active on all levels of career mentoring (here at IBM, and out in the community). It starts with supporting someone interested in the natural sciences. For example, I have participated in locally organized TechDays and have been invited to speak at schools.

I try to give young people insight into my research by giving talks and lab tours. This outreach has enabled me to motivate several young women to pursue their Master's or PhD degrees. As a mentor, my job is to provide support in many ways — for example, by introducing a mentee to someone who has a similar background.

What is your opinion of SVIN's push "to foster society's understanding of practical technological applications by strengthening the ability of our school systems to encourage students to use technology in a sustainable, ethical and socially compatible manner"?

I agree with this statement whole-heartedly. People need a certain level of basic technical knowledge in order to be able to manage technology in their everyday lives.

The basics of the technology we use every day are grounded in mathematics and physics. These subjects are the key to understanding the world around us because they train you to think logically. It's never too early to learn that.

What does the SVIN award mean to you?

I'm honored to have received it in the category of innovation. This award is not only a recognition of my work in science and technology but is also a real motivator to continue down the path I've chosen. It also encourages me to continue fostering more young women's interest in pursuing careers in the natural sciences or engineering.


New destinations for mobile

A Q&A with Gal Shachor, newly appointed Distinguished Engineer at IBM Research – Haifa.

Gal is one of the pioneering founders of WebSphere. His current work focuses on architecture and innovation for mobile computing platforms.

How did the industry – and IBM – make the move to mobile computing and how were you involved?

Gal Shachor: In 1996, contrary to the advice I got from many people, I decided to create a servlet container called "servlet express," designed to extend the capabilities of web servers that host applications. This was one of the first servlet containers in the market and later became a vital part of the WebSphere Application Server. I'm fond of this project because it was a tremendous innovation at that time – enabling connection and extension points to all the major web servers that existed.

But that was more than 15 years ago, when the Internet started to change from a technology that was helpful but on the sidelines, to a critical part of how we work and do business (and we delivered WebSphere).

Now – just in the last couple of years – we are seeing a similar change starting in the mobile arena. Similar to what happened in the late 1990s on the web, clients are now asking us for solutions that can help them manage mobile applications in the enterprise; more easily develop new applications; and provide security for all these apps.

How did you personally move to working in mobile?

GS: Before getting into mobile, I worked on form-based applications and user interfaces that make it easy for business people and analysts to build their own web applications. One after another, clients started asking what we were doing for mobile applications. With so many requests, it was clear the market was evolving and clients badly needed a way to move their processes over to mobile.

What challenges is the industry facing with the use of mobile in the workplace?

GS: With so many people now using their personal mobile devices to help with work tasks, enterprises need tighter management capabilities for the applications being used. And the IBM mobile platform provides just that.

As part of our work on the mobile platform, IBM provides a shell (software interface) that businesses can use to add capabilities that force the user to upgrade, block threats, register applications, and provide security. The IT department can add restrictions, run updates, and manage notifications. We also provide the enterprise with tools to more easily develop mobile applications that meet their new emerging business needs.

What kind of mobile opportunities exist in the enterprise space?

GS: We have two possible scenarios: business to employee (B2E) and business to consumer (B2C). For company employees accessing corporate data, their mobile devices need to be managed and applications need to be vended from a corporate app store. This is a large market, especially with new apps being developed first for mobile and then for the desktop.

The second direction is the business to consumer (B2C) opportunity. For example, an insurance company might give their customers a mobile app they can use to fill out data at the scene of a car accident. Both scenarios represent huge market opportunities due to the tremendous flood of mobile devices into our everyday lives.

Note: Gal Shachor is a Distinguished Engineer and Senior Technical Staff Member at IBM Research – Haifa. He is the author of the book JSP Tag Libraries, as well as numerous patents and papers, and the recipient of several IBM corporate awards for his work.


Optimizing underwater oil exploration

Applied mathematicians from IBM Research are working with the Norwegian University of Science and Technology (NTNU) to maximize oil exploration in the North Sea.

Oil shapes the quality of daily life, the world over. And nearly everything associated with it – especially finding it and getting it out of the ground – poses an international challenge. To solve that challenge, applied mathematicians at IBM Research are looking for ways to help oil companies find oil faster and less expensively.

Research mathematician Andrew Conn (pictured) launched the Reservoir Management and Production Optimization project last year to develop algorithms to optimize petroleum production network simulator parameters, using proxy models and structural constraints. The project will also make its open code available to developers through IBM's Open Collaborative Research (OCR) program.

Launching Reservoir Management and Production Optimization

Conn, an advanced analytics and optimization researcher, was asked to join the technical committee of the Norwegian University of Science and Technology's Center for Integrated Operations while working with Norway's Statoil in 2006. He took advantage of the multiple in-person meetings each year to strengthen IBM's relationship with NTNU. This included working on the OCR project with IBM's summer interns from the university. They focused on optimizing the extraction of oil and gas from the subsea. The goal: create models and simulate numerous scenarios to locate and manage petroleum in the subsea more rapidly and efficiently than currently possible.

Now in its second year, Conn's OCR project continues to develop optimization applied to simulations and models that will help energy companies maximize the amount of oil they can get out of a reservoir basin. And as with any underwater exploration, the search is complicated by the difficulty of getting a clear picture of the tremendous geological diversity in the reservoir basin.

Improving the search for underwater oil

The OCR team has relied on various optimization techniques in looking for underwater oil supplies. Such techniques typically use either line search or trust region methods.

Using the line search method, researchers choose a starting point, determine a direction in which to move, and iterate. Using the trust region method, researchers create a model that is compared with the actual function to be optimized. If there is reasonable agreement between the model and the behavior of the actual function, the researchers expand the region in which the model is trusted, covering a wider area of potential exploration. Conversely, if the actual objective does not behave like the model, they shrink the region of exploration. Iterating on this paradigm, the goal is to home in on the solution within a region for which a sufficiently accurate model has been built.
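A minimal sketch of the trust-region loop just described may help. The quadratic model, the acceptance and radius-update thresholds, and the one-dimensional test function below are illustrative textbook choices, not the actual algorithm used by Conn's team:

```python
def trust_region_minimize_1d(f, g, h, x, radius=1.0, max_radius=8.0,
                             eta=0.15, tol=1e-10, max_iter=200):
    """Minimize f using a quadratic model trusted within +/- radius of x.

    f, g, h: the objective and its first and second derivatives.
    """
    for _ in range(max_iter):
        gx, hx = g(x), h(x)
        if abs(gx) < tol:           # gradient small enough: done
            break
        # Minimizer of the model m(p) = f(x) + gx*p + 0.5*hx*p^2,
        # clipped to the trusted interval [-radius, radius].
        p = -gx / hx if hx > 0 else (-radius if gx > 0 else radius)
        p = max(-radius, min(radius, p))
        predicted = -(gx * p + 0.5 * hx * p * p)  # decrease the model promises
        actual = f(x) - f(x + p)                  # decrease actually achieved
        rho = actual / predicted if predicted > 0 else 0.0
        if rho < 0.25:                            # poor agreement: shrink
            radius *= 0.25
        elif rho > 0.75 and abs(p) >= radius:     # good agreement at the edge: expand
            radius = min(2 * radius, max_radius)
        if rho > eta:                             # accept only useful steps
            x += p
    return x

# Illustrative objective: f(x) = x^4 - 2x^2, with minima at x = -1 and x = 1.
f = lambda x: x**4 - 2 * x**2
df = lambda x: 4 * x**3 - 4 * x
d2f = lambda x: 12 * x**2 - 4
x_star = trust_region_minimize_1d(f, df, d2f, x=3.0)
```

Each iteration compares the decrease the model promised against the decrease actually achieved; the ratio between the two drives whether the trusted region grows or shrinks, which is exactly the expand/contract behavior described above.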

Diagram of undersea drilling platform

The team is also working on simulating the entire field that they are trying to optimize. The model for these simulations includes the various components (wells, pipelines, manifolds, separators) that make up an exploration application. Simulations might be done for pressure drops in the pipelines, for how wells behave, and for other real-world oil extraction scenarios. Normally for such problems, some derivatives are unavailable – and both discrete and continuous variables are involved – which has significant consequences for the optimization methods that can be used.

Conn and his colleagues at the Center for Integrated Operations compared their optimization approach with the NOMAD black-box optimization software – a well-regarded package for optimization without derivatives. Where Conn's algorithm required four iterations to determine an appropriate approximate solution, NOMAD required 351. Where Conn's team needed 82 well simulations, NOMAD needed 23,402; likewise, Conn's team needed 1,662 pipeline simulations to NOMAD's 15,602. Furthermore, the IBM-NTNU solution was more than 10 percent better than NOMAD's approximate optimal solution.

“You would be surprised at the number of people who don’t know that IBM is engaged in this kind of work,” Conn said. “They can’t believe that we are in the business of helping oil companies save money by optimizing their exploration processes.

“Through the OCR, we are getting companies inside and outside the petroleum industry to understand that if they can improve their models, combined with the optimization, by even one percent, they are going to save millions and millions of dollars.”

This is just one of IBM Research's several upstream petroleum projects. And these techniques can be broadly applied to other industries where simulation and optimization with both discrete and continuous variables are required.


Project dreams of helping doctors find a cure for Lou Gehrig’s disease, other challenges

Biologists push theory to experiment with the wisdom of crowds.

Seven years ago, IBM Research scientist Dr. Gustavo Stolovitzky’s team was looking for a way to better understand the accuracy of the biological results yielded by the network reconstruction algorithms they were developing at IBM. In other words, how could Stolovitzky improve the evaluation of their reverse engineering efforts to better understand – and maybe help to solve – biomedical challenges such as cancer?

More generally, all computational biologists want a clear-cut evaluation of the models they use to analyze and eventually represent biological systems. Are their techniques working? How do their techniques compare with other techniques?

Stolovitzky and collaborator Dr. Andrea Califano, now the director of Columbia University’s Initiative in Systems Biology, decided to organize the DREAM (Dialogue on Reverse Engineering Assessment and Methods) Project to crowd source the analysis of high-throughput data (now so pervasive in biological research) and to address important challenges in biology.

Now taking submissions for DREAM7 challenges, Stolovitzky and colleague Dr. Pablo Meyer Rojas* discuss the goals of the project and how to submit responses to this year’s challenges.

How did the DREAM Project start?

Gustavo Stolovitzky: The explosion of genomics has created the need to organize and structure the data produced to generate a coherent biological picture. DREAM was created in order to foster concerted efforts by computational and experimental biologists to understand the limitations of the models built from these high-throughput data sets.

While I conceived the DREAM project as a way to understand the accuracy of the biological results yielded by the network reconstruction algorithms (reverse engineering) we were developing at IBM, it captured a need in the community that was, so to speak, up in the air.

My long-time collaborator (and former IBMer) Andrea Califano and I organized the first meeting with the New York Academy of Sciences in 2006. After that, the project was launched as a series of annual challenges that culminate in the DREAM conference.

What is the project's overall goal?

GS: In the context of the current avalanche of genomic data, DREAM's goal is to objectively assess and enhance the quality of data-based modeling of biological systems. For example, if we know what the results of a particular analysis should be (because we have what we call the “ground truth” contained in unpublished information, not yet available to the community at large) then we can test the community to assess how close to the ground truth the results are.

This approach has many useful outcomes.  
    • It can find the best analytical method for a given problem, because all the methods are pitted against each other on the same data set, and under the same evaluation scheme. 
    • It enables a dialogue in the community about why an analytical tool may yield good or bad results.  
    • It fosters a synergy between theoretical, computational and experimental scientists – all of whom look at the same data from different perspectives to achieve the great goal of understanding biology. 
    • It can help garner evidence for or against a hypothesis because, if nobody in the community can solve a given problem predicated on a hypothesis, then the underlying hypothesis may be wrong. Conversely, if at least one member of the community solves it, then the hypothesis can be considered verified. 
    • The outcomes of DREAM have the potential to complement peer-reviewed research and to increase the scientific community's confidence in biological models and algorithm reliability.

DREAM states that its “main objective is to catalyze the interaction between experiment and theory in the area of cellular network inference and quantitative model building in systems biology.” Please elaborate on this.

Pablo Meyer Rojas: The goal of systems biology is to understand the biological whole as more than the sum of the individual parts. In order to do this, we need to build comprehensive context-specific models of biological processes at the cellular or organism level, based on data inherent to the system under study.

We say that the models need to be quantitative because the ultimate goal of systems biology is to describe the behavior of biological systems based on precise measurements, and predict the response of those systems to perturbations, such as disturbances caused by disease.

These models are based on the construction of cell-maps from data describing the interactions of DNA, mRNA, proteins, drugs, etc. Networks are a succinct way to represent these interactions, and are the scaffolding from which to build the mathematical models that quantitatively implement our understanding of the biological realm.
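The network scaffolding Meyer Rojas describes can be sketched with a toy data structure. The gene names and edge weights below are invented purely for illustration; interactions stored as a weighted, directed graph can then be queried for a gene's targets and regulators:

```python
# Hypothetical regulatory interactions: regulator -> {target: strength}.
# Positive weights denote activation, negative ones repression.
network = {
    "geneA": {"geneB": 0.8, "geneC": -0.3},
    "geneB": {"geneC": 0.5},
    "geneC": {},
}

def targets_of(gene):
    """All genes directly regulated by `gene`."""
    return sorted(network.get(gene, {}))

def regulators_of(gene):
    """All genes that directly regulate `gene`."""
    return sorted(g for g, targets in network.items() if gene in targets)

print(targets_of("geneA"))     # direct targets of geneA
print(regulators_of("geneC"))  # geneC's direct regulators
```

Mathematical models of the cell can then be layered on top of such a graph, with the edge weights becoming parameters to be fitted against measured data.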

How are the challenges chosen?

GS: It has been said that a wise man's question contains half the answer.

With DREAM we try to pose relevant and important questions (the project’s challenges) about biological problems, whose answers should be found through the analysis of complex biological data. For example, how can we predict the survival of a cancer patient based on genomics data extracted from the patient's tumor? Or, what is the therapeutic effect of a drug on a cell, given that we know the effect of the same drug on other cells?

Another important consideration is that we need to know the answer to a challenge in order to assess the predictions. Therefore the availability of unpublished data that can be used as ground truth to evaluate the submissions – and the willingness of the data producers to share their unpublished data – is essential.

Why use crowd sourcing?

PMR: In order to tap the wisdom of crowds, we need the crowds! Crowd sourcing is an effective way to reach out to people from a diverse set of communities as participants, and to get the broadest possible spectrum of methods for solving a problem.

Suppose you have a tough question for which you need an answer. You may not know the answer, and your immediate friends may not know the answer, either. But what if you could ask that same question to all your neighborhood, town, province, country or planet?

It is a bit like the “ask the audience” life-line in the game show “Who Wants to be a Millionaire”.

It is possible that someone who has the expertise happens to know the answer. But to find that person we need to tap the crowds. In the case of systems biology, crowd sourcing the solution of a challenge allows us to search among many different methodologies used to analyze the bio-data, and find the one that produces the most accurate predictions. The more participants we get, the more likely it is that if a solution exists, we will find it.

How is a "best answer" for each challenge chosen, and who chooses?

GS: Before the challenges are made public, the individuals involved in organizing a challenge (including the people who generated the data) get together and decide on a scoring method based on a few different metrics. Participants are then informed of how their entries will be evaluated.

Once the challenge is finished, predictions are evaluated and scores are published, along with all of the scoring methods. Only the names of the best performers are revealed, but each participant is informed of his or her own score.

Something interesting we discovered is that when we aggregate the predictions of the community, the resulting aggregate solution tends to be the best answer. This gives new meaning to the concept of the wisdom of crowds.
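The effect Stolovitzky mentions is easy to demonstrate with a toy simulation. The "participants", the noise model, and the simple per-value averaging below are hypothetical stand-ins, not DREAM's actual aggregation or scoring pipeline:

```python
import random

random.seed(7)

# Hypothetical ground truth: the strengths of 20 molecular interactions.
truth = [random.uniform(0.0, 1.0) for _ in range(20)]

def rmse(prediction):
    """Root-mean-square error of a prediction against the ground truth."""
    return (sum((p - t) ** 2 for p, t in zip(prediction, truth))
            / len(truth)) ** 0.5

# 30 simulated participants, each submitting the truth plus independent noise.
submissions = [[t + random.gauss(0.0, 0.3) for t in truth] for _ in range(30)]

# Community aggregate: the per-interaction mean over all submissions.
aggregate = [sum(values) / len(values) for values in zip(*submissions)]

errors = sorted(rmse(s) for s in submissions)
print(f"best individual RMSE: {errors[0]:.3f}")
print(f"aggregate RMSE:       {rmse(aggregate):.3f}")
```

Because the simulated participants' errors are independent, they partially cancel in the average, so the aggregate typically beats even the best single submission.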

How will these “best responses” be used? Do you have a past example to share?

PMR: The algorithms of the best performers can be used to generate new predictions that will be tested experimentally. For example, in DREAM5 a challenge asked for predictions to determine the affinity of synthetically generated peptides (peptides are small pieces of proteins) to antibodies (proteins that rid the body of pathogens). The algorithms from the best performers were then used to generate and test a second round of peptides that were predicted to work better together.

In another DREAM5 challenge, a community prediction of the gene regulatory network of Staphylococcus aureus was created. It could be used to help find new antibiotics against this serious bacterial pathogen, whose drug-resistant strains include MRSA (methicillin-resistant Staphylococcus aureus).

Who can participate, and how?

GS: Anyone is invited to participate. The more diverse the community of participants, the better chance we have of finding an innovative methodology. Participants need to register here, and can choose any (or all) of the four challenges.

This year’s challenges are what we call translational, in the sense that we use basic research that can be translated into medically relevant knowledge, in areas such as breast cancer and amyotrophic lateral sclerosis, commonly referred to as ALS or Lou Gehrig’s disease.

We also have a number of incentives for challenge participation. For example, in the prediction of progression of Lou Gehrig’s disease, the non-profit Prize4Life will award $25,000 to the best performing submission.

For all challenges, an expense-paid speaking invitation to the DREAM conference (Nov 12-16 in San Francisco) will be provided to the best performer. This year we are also partnering with the journals Open Network Biology, Science Translational Medicine and Nature Biotechnology for publication of the best performing results.

* Besides Stolovitzky and Meyer, IBM Research scientists Raquel Norel and Erhan Bilal are also working on the DREAM Project.


Green chemistry and the quest for environmentally sustainable plastics

White House recognizes IBM Research scientist for green chemistry breakthrough.

While exploring metal-free materials and processes for the thin polymeric films used in microprocessors, IBM researcher Jim Hedrick and Stanford University professor Dr. Robert Waymouth discovered that these chip development techniques can also be used in organocatalysis – the use of organic materials instead of metals to increase the rate of a chemical reaction. The goal was to create highly recyclable, even biodegradable, plastics for use in a myriad of applications, such as medication packaging and water desalination.

What is green chemistry?

Green chemistry is the design of chemical products and processes that reduce or eliminate pollution and its negative environmental impacts.

In practice, this may involve reducing waste products in industrial settings; replacing tin-based components in cosmetics, nylons and polyesters with non-toxic polymers; and making plastic recycling more efficient.

For pioneering the application of organocatalysis, Hedrick, who works in IBM Research’s Advanced Organic Materials department in Almaden, and Waymouth, a chemistry professor, are being recognized with the Environmental Protection Agency (EPA) Presidential Green Chemistry Challenge Award. This green chemistry discovery and approach could lead to the creation of biodegradable materials made from renewable resources.

Green chemistry pioneers

Motivated by a desire to generate new classes of metal-free plastics for microelectronic applications, Hedrick and Waymouth first focused their efforts on ring-opening polymerization – a strategy dominated by metal oxide or metal hydroxide catalysts that allows larger polymer chains to form. They have shown that these organic catalysts not only exhibit activities that rival the most active metal-based catalysts, but by virtue of their novel linking mechanisms, provide access to polymer architectures that are difficult to access by conventional approaches.

Plastics are ubiquitous and useful modern materials, but their widespread utility and indiscriminate disposal has also left an adverse and enduring environmental legacy.

Hedrick and Waymouth’s new methods for generating biodegradable and biocompatible plastics could, for example, eliminate the leaching of antimony, a toxic metal, from the commercial polyethylene terephthalate (PET) commonly used to make water bottles.

Achieving this vision, however, will require: 

  • The conversion of renewable resources to products with the cost and performance equal or superior to existing materials.
  • The development of more environmentally benign catalytic processes.
  • The implementation of recycling or biodegradation strategies that would enable a closed-loop life cycle for these materials.

Catalysis is a foundational pillar of sustainable chemical processes, and the discovery of highly active, environmentally benign catalytic processes is a central goal of green chemistry. Environmentally sustainable plastics, smarter recycling methods, new ways to deliver medicine – these are all areas that could benefit from these recent discoveries in green polymer chemistry.

Blue Gene/Q delivers a smarter planet in record speed

Editor’s note: This article was written by Michael Rosenfield, IBM Research’s director of Deep Computing Systems.

IBM’s Blue Gene/Q supercomputer, Sequoia, at the Lawrence Livermore National Lab took the number one ranking in the TOP500 list of the world’s fastest machines. And 21 other Blue Gene/Q configurations also earned spots on the list – including four in the top 10. Quite an achievement, given that IBM only started shipping these systems in volume to clients earlier this year.

What’s inside Sequoia?

Sequoia is a Blue Gene/Q supercomputer built on IBM Power architecture. It consists of 96 racks; 98,304 compute nodes; 1.6 million cores; and 1.6 petabytes of memory.

Compared to its predecessors, Sequoia is 90 times more powerful than ASC Purple and eight times more powerful than Blue Gene/L, relative to the peak speeds of those systems.

The Sequoia system used by Lawrence Livermore (LLNL) can sustain over 16 petaflops – a single petaflop is 10^15 floating-point operations per second. With this power, Sequoia will soon be used to simulate phenomena such as uncertainty quantification (the quantitative characterization and reduction of uncertainty in outcome scenarios across the natural sciences and engineering), hydrodynamics, and the physical properties of materials at extreme pressures and temperatures.
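The petaflop arithmetic is simple to check. In the sketch below, the workload size is an arbitrary illustration, not a benchmark from the article:

```python
PETAFLOP = 10 ** 15              # floating-point operations per second
sequoia_rate = 16.32 * PETAFLOP  # Sequoia's TOP500 figure

# Hypothetical job requiring 10^20 floating-point operations:
ops = 10 ** 20
seconds = ops / sequoia_rate
print(f"{seconds:,.0f} s (about {seconds / 3600:.1f} hours)")
```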

Today, the National Nuclear Security Administration uses Sequoia to research the safety, security and reliability of the United States’ nuclear deterrent – replacing the need for underground testing.

Argonne National Lab's (ANL) Blue Gene/Q-based system Mira is the third-fastest system in the world. Mira is being used to significantly advance science and industry. In science, ANL's exploration ranges from studying the evolution of our universe to simulating the strong force of subatomic particles. In industry, ANL is working to design more-efficient electric car batteries; understand global climate change; design fast neutron reactors capable of eliminating nuclear waste; and decipher the complexities of the biological world.

Researchers across academia, government and industry from around the world access Mira through blocks of compute time awarded via the peer-reviewed, competitive INCITE program.

Blue Gene is not only being adopted by our partners at the national labs, but also by our industry partners. For example, Électricité de France, the world's largest utility company, uses the Blue Gene/Q that ranked #40 in the TOP500 to better manage operations for its electricity generation and distribution business.

Read a related article on the Smarter Planet blog.

The Blue Gene/Q project has been supported and partially funded by the Argonne and Livermore labs on behalf of the United States Department of Energy. In addition, IBM gratefully acknowledges the collaboration with Columbia University and Edinburgh University, which also participated in the project.

Top500 IBM Supercomputer Highlights
  • The top-ranked system, LLNL's Sequoia Blue Gene/Q, can reach 16.32 petaflops
  • IBM has four other supercomputers in the Top 10:
    • #3 ANL-Mira Blue Gene/Q
    • #4 LRZ-SuperMUC iDataPlex Direct Water Cooled dx360 M4
    • #7 CINECA -Fermi Blue Gene/Q
    • #8 Juelich-JuQUEEN Blue Gene/Q
  • IBM has the most systems in the TOP500, with 213
  • The SuperMUC iDataPlex is the fastest system in Europe
  • IBM has the 20 most-energy-efficient systems on the list (all IBM Blue Gene/Q systems)


Research & Development in Europe

This article by Rich Hume, General Manager of IBM Europe, originally appeared on the IBM Smarter Planet blog.

In an ever more globally integrated economy, Europe has put a spotlight on one of its key competitive differentiators: research and development.

A fact acknowledged by Horizon 2020, the European Union’s ambitious €80 billion program for research and innovation.

Part of the drive to create new growth and jobs in Europe, Horizon 2020 will see projected EU research investment increase by as much as 46% compared to the current EU research programs, when it begins in 2014.

That’s no small bet.

As it stands, the EU’s current round of research investment funding is expected to create around 174,000 jobs in the short term, and up to 450,000 jobs and €80 billion in GDP growth over 15 years.

Read IBM General Manager of IBM Europe Rich Hume's complete article about research and development in Europe.

Supercomputing in Poland

Professor Marek Niezgodka, director of the Interdisciplinary Center for Mathematical and Computational Modeling (ICM) at the University of Warsaw, explains how ICM operates and uses supercomputing for advanced research.