The Role of Mathematical Sciences in Industry

Trends and Case Studies

In this section of the report, we give a broad but not exhaustive survey of the business applications of mathematics. We hope that the 18 case studies presented below will provide some answers to students who want to know what mathematics is used for “in the real world.” Most of the case studies are applications we heard about on our site visits, supplemented by published articles.

2.1 Business Analytics

The software industry is making a big bet that data-driven decision making … is the wave of the future. The drive to help companies find meaningful patterns in the data that engulfs them has created a fast-growing industry in what is known as “business intelligence” or “analytics” software and services. Major technology companies—IBM, Oracle, SAP, and Microsoft—have collectively spent more than $25 billion buying up specialist companies in the field. [Lohr, 2011-a]

“Business analytics” has become a new catchall phrase that includes well-established fields of applied mathematics such as operations research and management science. At the same time, however, the term also has a flavor of something new: the application of the immense databases that are becoming more and more readily available to business executives.

Mathematical approaches to logistics, warehousing, and facility location have been practiced at least since the 1950s. Early results in optimization by George Dantzig, William Karush, Harold Kuhn, and Albert Tucker were encouraged and utilized by the US Air Force and the US Office of Naval Research for their logistics programs. These optimization techniques, such as linear programming and its variations, are still highly relevant to industry today.

The new opportunity, both for businesses and for students hoping to enter industry, lies in the development of algorithms and techniques to handle large amounts of structured and unstructured data at low cost. Corporations are adopting business intelligence (i.e., data) and analytics (i.e., quantitative methods) across the enterprise, including such areas as marketing, human resources, finance, supply chain management, facility location, risk management, and product and process design.

Case Study 1: Predictive Analytics

In 2009 and 2010, IBM helped the New York State Division of Taxation and Finance (DTF) install a new predictive analytics system, modeled in part on IBM’s successful chess-playing program Deep Blue and its Jeopardy!-playing engine, Watson. The Tax Collection Optimization Solution (TACOS) collects a variety of data, including actions by the tax bureau (e.g., phone calls, visits, warrants, levies, and seizure of assets) and taxpayer responses to the actions (e.g., payments, filing protests, and declaration of bankruptcy). The actions may be subject to certain constraints, such as limitations on the manpower or departmental budget for a calling center. The model also includes dependencies between the actions. TACOS predicts the outcome of various collection strategies, such as the timing of phone calls and visits. The mathematical technique used is called a Markov decision process, which associates to each taxpayer a current state and predicts the likely reward for a given action, given the taxpayer’s state. The output is a plan or strategy that maximizes the department’s expected return not just from an individual taxpayer, but from the entire taxpaying population.
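The core computation behind such a system can be sketched with a toy Markov decision process. The states, actions, transition probabilities, and rewards below are invented for illustration (the actual TACOS model is far larger and proprietary); value iteration then finds the policy that maximizes expected discounted revenue:

```python
import numpy as np

# Toy Markov decision process for tax collection (all numbers invented).
# States:  0 = compliant, 1 = delinquent, 2 = in collection
# Actions: 0 = wait, 1 = phone call, 2 = field visit
P = np.array([  # P[a, s, t] = Pr(next state t | current state s, action a)
    [[0.9, 0.1, 0.0], [0.0, 0.8, 0.2], [0.0, 0.3, 0.7]],  # wait
    [[0.9, 0.1, 0.0], [0.3, 0.5, 0.2], [0.2, 0.3, 0.5]],  # phone call
    [[0.9, 0.1, 0.0], [0.5, 0.3, 0.2], [0.4, 0.2, 0.4]],  # field visit
])
R = np.array([  # R[a, s] = expected payment minus the cost of the action
    [ 0.0, 0.0, 0.0],
    [-1.0, 4.0, 2.0],
    [-5.0, 8.0, 6.0],
])

gamma = 0.95            # discount factor on future revenue
V = np.zeros(3)         # value of each taxpayer state
for _ in range(1000):   # value iteration
    Q = R + gamma * np.einsum('ast,t->as', P, V)  # Q[a, s]
    V_new = Q.max(axis=0)
    if np.abs(V_new - V).max() < 1e-9:
        break
    V = V_new
policy = Q.argmax(axis=0)  # revenue-maximizing action in each state
```

As in TACOS, the output is a strategy over the whole population of states, not a prediction about one taxpayer.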

In 2009 and 2010, TACOS enabled the DTF to increase its revenue by $83 million (an 8% increase) with no increase in expenses. The results included a 22% increase in the dollars collected per warrant (or tax lien), an 11% increase in dollars collected per levy (or garnishment), and a 9.3% reduction in the time it took cases to be assigned to a field office [Apte, 2011]. Similar methods, though different in detail, could clearly be applied by other businesses in areas like collections and accounts receivable.

Case Study 2: Image Analysis and Data Mining

SAIC is a company that develops intelligence, surveillance, and reconnaissance (ISR) systems for military applications. These automated systems have been heavily exploited during the war in Afghanistan: in 2009, unmanned aerial vehicles (UAVs) captured 24 years of full-motion video, and in 2011 they were expected to capture thirty times that amount.

This poses an obvious problem: How can the information in the videos be organized in a useful way? Clearly the army cannot deploy thousands of soldiers in front of computer screens to watch all of those years of video. Even if they could, humans are fallible and easily fatigued. In hours of surveillance video it is easy to miss the one moment when something isn’t right—say, a car that has previously been associated with bomb deliveries drives up to a particular house.

SAIC developed a “metadata” system called AIMES that is designed to alert humans to the possible needles in the haystack of data. First, AIMES processes the video to compensate for the motion of the UAV—itself an interesting mathematical challenge. Then it searches for objects in the field of vision and stores them in a searchable database. It also “fuses” other kinds of data with the video data—for example, if the operators of the UAV say, “Zoom in on that truck!” the program knows that the object in the field of view is a truck and may be of interest. Finally, AIMES is portable enough to be deployed in the field; it requires only a server and two or three monitors. See [“SAIC AIMES” 2010].

While stateside industry may not have quite as many concerns about terrorists or roadside bombs, audio and video surveillance are very important for the security of factories and other buildings. Cameras and microphones can be used for other purposes as well; for example, a microphone might detect that a machine is not working properly before human operators notice. Surveillance devices can also help first responders locate victims of a fire or an accident. See [“SAIC Superhero Hearing” 2010].

Case Study 3: Operations Research

In 2002, Virginia Concrete, the seventh-largest concrete company in the nation, began using optimization software to schedule deliveries for its drivers. The company owns 120 trucks, which had been assigned to 10 concrete plants. A significant constraint is that a cement truck has roughly two hours to deliver its load before it starts hardening inside the truck. Also, the construction business is very unpredictable; typically, 95 percent of a company’s orders will be changed in the course of a day.

Virginia Concrete brought in mathematicians from George Mason University and Decisive Analytics Corporation to develop tools to automate truck dispatching. Among other changes, the mathematicians found that the company could improve delivery times significantly by moving away from the model in which individual trucks were assigned to a “home” plant. Instead, they recommended that trucks should be able to go to whichever plant is closest. Also, in overnight planning it turned out to be useful to include “phantom” trucks, representing orders that were likely to be canceled. If the order was not canceled, it could be reassigned to a real truck.
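The heart of such dispatching is an assignment problem. The travel times below are invented, but the sketch shows how the Hungarian method (here via SciPy's `linear_sum_assignment`) matches trucks to orders at minimum total cost, regardless of any truck's "home" plant:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Invented travel times (minutes) for each truck to reach each order's
# site from its nearest plant.  Rows = trucks, columns = orders.
cost = np.array([
    [20, 45, 30, 60],
    [35, 25, 50, 40],
    [55, 30, 20, 35],
    [40, 60, 45, 25],
])

# Hungarian method: match trucks to orders minimizing total travel time.
# An order likely to be canceled could be covered by an extra "phantom" row.
trucks, orders = linear_sum_assignment(cost)
total_time = cost[trucks, orders].sum()
```

Real dispatching adds the two-hour delivery window and intraday order changes as constraints, but the optimization core is the same.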

For testing purposes the company used the software to make all of the scheduling decisions; however, since the system’s installation, dispatchers have been allowed to override the computer. The system has enabled Virginia Concrete to increase the amount of concrete delivered per driver by 26% [Cipra 2004].


2.2 Mathematical Finance

…there is likely to be less emphasis on exotic derivatives and more trading will take place on exchanges.

In the future, models will have to have realistic dynamics, consistent with observation. Control of execution costs will also be critical, and for that, a good understanding of market microstructure and trade data will be essential. [From interviews.]

Quantitative methods in finance got a black eye from the credit crisis of 2007 and 2008, which in many circles was interpreted as a failure of quantitative models to account for dependencies in market data. Risk models assumed that real estate defaults in, say, Miami and Las Vegas were independent of one another; or at least that the correlations were small. But in a panic situation, all of the correlations went to one.

However, in the fallout of the crisis and the subsequent recession, financial managers learned some very worthwhile lessons. They now understand that mathematical models are not just plug-and-play; you have to examine the assumptions behind them seriously. The failure of certain simplistic models does not mean all mathematical models are bad; it means that the models have to become more realistic. Above all, it is important for students to realize that the financial industry is not fleeing from quantitative analysis. Mathematicians and applied mathematicians are still in great demand; their skills will become even more valued as quantitative models become more sophisticated and as managers try to understand their limitations. It may be the case, though, that students’ mathematical skills should be backed up by a greater knowledge of the financial industry than was needed in the past.

Case Study 4: Algorithmic Trading

In 2009, Christian Hauff and Robert Almgren left Bank of America, the world’s top firm in algorithmic trading of stocks and derivatives, to form a new company called Quantitative Brokers. They saw an opportunity to apply the same principles of high-frequency trading to a class of assets that had not yet become highly automated: interest-rate futures.

Automated trading has become commonplace in the options market, in part because the tools of mathematical finance require it. Large banks want to hold their assets in a risk-neutral way, which allows them to make money (or at least avoid losing money) no matter which direction the market moves. In the early 1970s, Fischer Black and Myron Scholes discovered how to do this with a strategy called dynamic hedging, which requires constant small trades.
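The formula behind dynamic hedging can be illustrated with a minimal Black-Scholes calculator. The delta it returns is the hedge ratio that must be continually rebalanced as the market moves; the sample inputs are arbitrary:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Price and delta of a European call under the Black-Scholes model.

    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: annualized volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    price = S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
    delta = norm_cdf(d1)  # shares to hold per option sold when delta-hedging
    return price, delta

# Arbitrary sample inputs: an at-the-money call, one year to expiry.
price, delta = black_scholes_call(S=100.0, K=100.0, T=1.0, r=0.05, sigma=0.2)
```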

The main emphasis of Black and Scholes’ work was the pricing of options. It took another two decades for financial engineers to start taking into account the process of executing a trade. There are many reasons not to execute a trade all at once. You may want to wait until trading partners willing to give you a good price arrive, or until the market moves toward your target price. If your trade represents a significant fraction of the day’s market in an asset, you may want to move slowly to avoid unduly influencing the market price.

Trade execution is Quantitative Brokers’ main business. The company uses computer algorithms to plan a strategy for a path that leads from a client’s position at the beginning of the day to the desired position (say, buying X lots of Eurodollar futures at a price less than Y) at the end of the day. Each client has a certain degree of risk aversion, so the client’s utility function will be a linear combination of expected profit and expected risk. Quantitative Brokers’ STROBE algorithm finds the trajectory that optimizes the client’s utility function, and it generates an envelope around the optimum that summarizes the range of acceptable deviations. Mathematical tools include differential equations and the calculus of variations.
See [“Anatomy of an Algo” 2011].
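STROBE itself is proprietary, but the published Almgren-Chriss model gives the flavor of such trajectory optimization: minimizing expected cost plus a risk-aversion multiple of its variance yields a closed-form trading schedule. All parameter values below are invented:

```python
import math

def liquidation_schedule(X, T, n, sigma, eta, lam):
    """Almgren-Chriss optimal liquidation trajectory (a published model,
    sketched here for illustration; not Quantitative Brokers' algorithm).

    X: initial position, T: horizon, n: trading intervals,
    sigma: price volatility, eta: temporary impact coefficient,
    lam: risk aversion (weight on the variance of execution cost).
    """
    kappa = math.sqrt(lam * sigma ** 2 / eta)  # urgency: larger = sell faster
    times = [i * T / n for i in range(n + 1)]
    # Optimal holdings decay along sinh(kappa * (T - t)) / sinh(kappa * T)
    return [X * math.sinh(kappa * (T - t)) / math.sinh(kappa * T) for t in times]

# Invented parameters: sell 1000 lots over one day in 10 intervals.
holdings = liquidation_schedule(X=1000, T=1.0, n=10, sigma=0.3, eta=1e-4, lam=1e-5)
```

A more risk-averse client (larger `lam`) gets a more front-loaded schedule; a risk-neutral one trades at a constant rate.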


2.3 Systems Biology

Pharmaceutical researchers have undertaken many initiatives and technologies to stem the rising costs of drug discovery and development. Biomarkers, adaptive trial designs, modeling, trial simulations, predictive metabolism, data mining, and disease models have reshaped the way in which researchers approach discovery and development. Quantitative pharmacology, which leverages model-based approaches, operates at both cultural and technical levels to integrate data and scientific disciplines, … [Allerheiligen 2010]

The completion of a working draft of the Human Genome Project in 2000 was supposed to usher in a new era of individualized medicine and targeted drug discovery. However, it turned out that only a few uncommon diseases or disease variants result directly from individual mutations in the human genome. Most common disorders—such as diabetes and the number one target of drug research, cancer—arise from the malfunctioning of complicated networks of genes. The idea of treating such diseases by fixing one gene is beginning to look as naïve as the idea of fixing an engine by replacing one screw. Instead, doctors may need a whole sequence of interventions, in targeted amounts, at particular times and in particular places in the gene network. As the complexity of gene networks becomes more apparent, mathematical methods for their analysis will become more important.

Some of the focus of research in biotechnology has shifted away from genomics to other “omics,” such as proteomics, which seeks to understand the shape and folding of proteins that might become targets for drugs. Molecular dynamics simulations start at the most fundamental level, using the principles of quantum mechanics. Recent advances in algorithms, software, and hardware have made it possible to simulate molecules containing tens of thousands of atoms for up to a millisecond—the time scale at which many important biological processes happen.
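At the core of any molecular dynamics code is a time-stepping scheme for Newton's equations. The sketch below uses velocity Verlet, the standard symplectic integrator in MD, on a one-dimensional harmonic oscillator standing in for a real many-atom force field:

```python
import math

# Velocity Verlet on a 1-D harmonic oscillator.  Hooke's law is an
# invented stand-in for the complicated force fields a real MD code
# evaluates over thousands of atoms at every step.
def force(x, k=1.0):
    return -k * x

def velocity_verlet(x, v, dt, steps, m=1.0):
    a = force(x) / m
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt   # position update
        a_new = force(x) / m              # force at the new position
        v += 0.5 * (a + a_new) * dt       # velocity update (averaged forces)
        a = a_new
    return x, v

# Integrate for one period (2*pi for k = m = 1); the trajectory should
# return near its starting point with energy nearly conserved.
dt = 0.001
x, v = velocity_verlet(x=1.0, v=0.0, dt=dt, steps=int(2 * math.pi / dt))
```

The symplectic property is what keeps the energy error bounded over the billions of steps a millisecond-scale simulation requires.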

Other mathematical models go in the opposite direction and operate on the level of the whole organism. These models are used, for example, to predict how a population of patients—each one with his or her unique physiology—might respond to a proposed public health intervention. Eventually, whole-patient models may become integrated with genomic data to make truly individualized medicine possible.

The mathematical and computational techniques behind these models include network science, deterministic and stochastic differential equations, Bayesian networks and hidden Markov models, optimization, statistics, control, simulation, and uncertainty quantification.

Case Study 5: Molecular Dynamics

In 2001, David Shaw, a computer scientist who had previously been the CEO of a hedge fund that used computer-based trading strategies, launched a new private research laboratory that would be devoted to the problem of protein folding. Shaw’s lab developed a supercomputer named Anton with 512 chips that were custom-built to accelerate the computation of atomic interactions. Even with such powerful hardware, though, a brute-force simulation of a protein molecule is not feasible in any reasonable amount of time. The other key ingredient is the Shaw lab’s molecular dynamics software, called Desmond, which uses judicious approximations to simplify the calculation of the force fields and also employs novel parallel algorithms that reduce the amount of communication required between Anton’s processors.

There was no guarantee at the outset that Anton would work better than other approaches, such as algorithms that divide the calculation up and parcel it out to many different computers. (This was the approach taken by Stanford University’s Folding@home project.) However, D. E. Shaw Research announced in 2010 that it had simulated the folding and unfolding of a protein called FiP35, which contains 13,564 atoms, over a period of 100 microseconds. This was a tenfold increase over the length of time simulated by the best previous programs. The simulation took about three weeks to run.

The investigators chose FiP35 because its folded and unfolded structures were well understood experimentally. Even so, the simulation produced new scientific insight, by showing that the pathway from the folded to unfolded states was essentially the same each time. Science magazine named Shaw’s simulation one of the top ten breakthroughs of the year across all fields of science. In the future, such simulations may make it possible to study drug-protein interactions that occur too rapidly to be studied in a traditional laboratory. See [D’Azevedo 2008].

Case Study 6: Whole-Patient Models

By 2020, virtual cells, organs, and animals will be widely employed in pharmaceutical research. – PricewaterhouseCoopers, Pharma 2020: Virtual R&D

Two San Francisco Bay area companies, Entelos and Archimedes, Inc., are pioneering the field of computer modeling of the whole body. Although neither is close to a comprehensive simulation of human biology, both of them do model major subsystems, such as the cardiovascular system and the metabolic networks involved in diabetes.

Entelos’ model, called PhysioLab, and the Archimedes Model can be used to predict adverse reactions as well as the outcomes of clinical drug trials. Clearly, drug companies could save a great deal of time and money by screening out ineffective or harmful compounds before going to the expense of setting up a clinical trial. In addition, simulations can explore the effects of multiple-drug therapy, which is very difficult to do in clinical trials. While 20 different combinations of drugs might require 20 different clinical trials, a simulation can quickly home in on the one combination that is most likely to be effective.

For example, Archimedes, Inc. was asked by a client (a large HMO) to evaluate the effectiveness of a new preventive treatment regimen called A-L-L (aspirin, lovastatin, and lisinopril) for patients with diabetes or heart disease. The Archimedes Model predicted that the combination therapy should reduce heart attacks and strokes in the target population by 71%. A subsequent clinical study confirmed the model, finding about a 60% reduction. The HMO subsequently recommended to all of its participating doctors that they prescribe the new regimen to their patients who matched the criteria for treatment.

The fields of mathematics and computer science used by Entelos and Archimedes include nonlinear dynamics, control theory, differential equations, and object-oriented programming.

See [“Virtual Patients” 2010] and [“The Archimedes Model” 2010].


2.4 Oil Discovery and Extraction

For the oil production business, now is a time of risk and opportunity. Despite worldwide concerns about climate change and pressures to reduce our carbon footprint, our society remains heavily dependent on oil and natural gas for the near future. Dire prognostications about “peak oil” have so far not come to pass—in large part because they underestimated the ability of the oil industry to innovate and unlock new, “unconventional” sources of oil.

Enhanced production techniques—injecting carbon dioxide into the ground—make it possible to recover more oil from existing wells, and also sequester carbon that would otherwise be released into the atmosphere. Heavy oil deposits once considered too expensive to develop, such as the tar sands of Alberta and the oil shales of Colorado and Wyoming, have become more attractive as the price of oil has gone up. Deepwater drilling has also picked up momentum, bringing new risks that became apparent with the 2010 BP oil spill in the Gulf of Mexico.

As oil becomes more difficult to find and more expensive to extract, mathematical algorithms and simulations play an ever more important role in both aspects of the business. Inversion of seismic data (using seismic traces to map subsurface rock formations) has long been an important ingredient in oil prospecting. Advances in algorithms and computer hardware and software have brought three- and four-dimensional simulations within reach. Large-scale basin models help companies decide whether a rock formation is a promising candidate for drilling. Smaller-scale reservoir models are used while a field is in active production to predict the flow of oil within the field, to devise strategies for optimizing the rate of extraction, and to anticipate problems such as the reactivation of a geological fault due to changes in stress within the reservoir rocks.
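In its simplest linear form, seismic inversion is a least-squares problem: given the distance each ray travels through each rock layer and the observed travel times, recover the layers' slownesses (reciprocal velocities). The geometry below is invented and noise-free; real surveys are enormous, noisy, and ill-posed, so regularized and iterative solvers take the place of this direct call:

```python
import numpy as np

# Toy travel-time inversion.  G[i, j] = distance ray i travels through
# layer j (invented geometry); t = G @ m gives the travel times for
# layer slownesses m.  Recover m from the observed times by least squares.
G = np.array([
    [1.0, 0.0, 0.0],
    [1.0, 2.0, 0.0],
    [1.0, 2.0, 3.0],
    [2.0, 1.0, 1.0],
])
true_slowness = np.array([0.50, 0.25, 0.20])  # s/km, invented
t_obs = G @ true_slowness                     # noise-free synthetic data

m, *_ = np.linalg.lstsq(G, t_obs, rcond=None)  # recovered slownesses
```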

Dynamic simulations also enable oil companies to analyze and minimize the risk of accidents before a facility is certified for production. However, the need for better risk analysis and modeling was brought home by the BP oil spill. Clearly, faster models using real-time data are needed to monitor conditions in a well and assess damage in the case of an unexpected event.

Case Study 7: Basin Modeling

An oil reservoir requires very special geologic circumstances to develop. There must be a source rock (sedimentary rock containing organic matter), a reservoir rock into which the oil migrates (usually not the same as the source rock), a trap (impermeable rock) that keeps the oil from escaping to the surface, and overburden rock that forces the source rock far underground, so that temperature and pressure will “cook” the organic material and create oil. Even if all four ingredients are present, there still may not be any oil, because timing is crucial. If the geological trap forms too late, then the oil will be long gone.

Basin models simulate all of these processes from basic physical principles. For instance, Schlumberger’s PetroMod software starts with information about the ages and properties of each layer of rock. It computes the pressure and temperature of each layer through geologic time, and models the resulting effect on the rock’s porosity, density, and other properties. This information is fed in turn into chemical models that simulate the generation of petroleum and its breakdown into gas and oil of different molecular weights. Fluid-flow models track the migration of the hydrocarbons, taking into account whether they are in liquid or gaseous form, how permeable the rock is and whether there are faults. The results of the model are validated against current measurements from trial boreholes. In many cases, the simulations are run multiple times with different parameters to ascertain the effect of uncertainty in the data.
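The thermal part of such a model reduces, in its simplest form, to the heat equation. The sketch below steps a one-dimensional temperature profile through a sediment column toward its steady-state geotherm; the grid, diffusivity, and boundary temperatures are invented:

```python
import numpy as np

# Explicit finite-difference solution of the 1-D heat equation through a
# sediment column (all numbers invented for illustration).
n = 51
dz = 100.0                  # m: a 5 km column in 100 m cells
kappa = 1e-6                # m^2/s, thermal diffusivity
dt = 0.4 * dz ** 2 / kappa  # time step inside the stability limit (r = 0.4)

T = np.full(n, 10.0)        # deg C, initial temperature
T[-1] = 150.0               # hot base; cool surface held at T[0]

for _ in range(20000):
    r = kappa * dt / dz ** 2
    T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[0], T[-1] = 10.0, 150.0  # re-impose the boundary temperatures

# After many steps the profile relaxes to a linear geotherm between the
# surface and basal temperatures.
```

A production basin simulator couples this temperature history to compaction, chemical kinetics, and fluid flow, layer by layer through geologic time.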

All in all, basin modeling is a highly interdisciplinary enterprise that combines fluid flow, heat flow, chemical kinetics, geology, differential equations, and stochastic analysis, as well as some of the most intense supercomputing on the planet. Billions of dollars can be at stake.

Two examples illustrate the upside of basin modeling and the downside of not doing it. Near the Prudhoe Bay oil field in Alaska lies another prospect called Mukluk, where oil companies spent $1.5 billion for lease rights in the early 1980s. It was called “the most expensive dry hole in history.” Although the geologic formation closely resembled Prudhoe Bay, either the time sequence was wrong or the trap rock was ineffective, and there was no oil to be found. For a more positive example, Mobil and Unocal purchased rights to a deepwater site off Indonesia, called the Makassar Straits, which according to conventional wisdom was a poor candidate for drilling because the source rock was “postmature.” However, Mobil’s computer models indicated that oil was still being generated in the source rock. A test well in 1998 proved that the computer models were right, and Unocal began production at the site in 2003. It was the first deepwater oil field in Indonesia, and its peak production was about 20,000 barrels per day. See [Al-Hajeri 2009].


2.5 Manufacturing

Applied mathematics continues to be an integral part of manufacturing in many different ways: designing prototypes, optimizing designs, verifying the designs, production and inventory planning, and managing supply chains.

Multidisciplinary design optimization (MDO) provides procedures and analytic and computational tools for coordinating efforts of design teams from multiple disciplines. Simulation-based design of complex systems in aerospace and automotive systems, for example, relies on computer analysis (including computational fluid dynamics and finite-element analysis). One of the major challenges still facing the computer-aided design (CAD) industry is to unify design, analysis, and verification into one seamless process. Too often, design engineers and verification engineers use different algorithms, different software and different file types. This creates a bottleneck, as the CAD files have to be converted from one form to another. Isogeometric analysis is a promising new technique used to create three-dimensional virtual models that can be plugged directly into physical differential equations.

The goal of production planning is to deliver a build schedule that makes efficient use of capital resources while satisfying as much demand as possible. The build schedule needs to take into account the flexibility of production resources, the stochastic nature of supply and demand within the supply chain, and the timing of new product releases and production facility improvements. Planning processes that rely on heuristic, manual decision-making are not adequate in industries with complex mixtures of products and manufacturing processes. Better decision algorithms, improved data management, and an automated and integrated planning process are needed.

Case Study 8: Virtual Prototyping

In 1992, design and performance prediction of tires at Goodyear Tire & Rubber took months of computer time using finite-element analysis. Although tires appear simple from the outside, they have a very complex geometry with 18 or more components blended into a single tire, each made of different materials such as rubber, polyester, steel, and nylon. Rubber itself is one of the most complicated materials known to engineering. And because Goodyear’s competitive edge is in the design of all-season tires, the tire’s performance has to be evaluated under every driving condition.

Even though Goodyear had supercomputers, the company recognized that the way the models were being set up made them completely impractical. In 1994, Goodyear entered into a Cooperative Research and Development Agreement (CRADA) with Sandia National Laboratories, which gave the company access to Sandia’s physical modeling and simulation expertise. Over the next decade, Goodyear and Sandia developed new software that compressed the solution time for complex models. As a result, Goodyear could for the first time run computer simulations in advance of road tests. The “innovation engine” that came out of the project reduced development times from three years to one and prototype costs by 62%.

Best of all from the company’s point of view, the partnership resulted in new, award-winning products, such as the Assurance tire with TripleTred Technology, which contains separate zones for traction on water, ice, and dry roads. The TripleTred Technology won an R&D100 award from R&D Magazine. See [“A New Approach”, 2005] and [Sandia 2009].

Case Study 9: Molecular Dynamics

Molecular dynamics is not used only in biotechnology and pharmaceutical research. Procter & Gamble (P&G), like many other companies, is under market pressure to replace the petroleum-based materials in its products with so-called “green” materials. At the same time, the company is not willing to sacrifice the performance that its customers have come to expect. For example, a variety of factors go into customers’ expectations for a dishwashing detergent: its thickness, its “feel,” its foaming characteristics and mixing properties, and its separation over the product’s lifetime. Developing new chemicals with the desired properties requires fundamental research into surfactants and polymers at the molecular level.

Unfortunately, lab experiments alone cannot do the job. The self-assembling structures that produce a foam are too small to be seen through a microscope. In order to visualize the foaming process, P&G turned to computer molecular dynamics simulations.

However, the company’s supercomputers were to a large extent booked for other research projects as well as for routine production tasks. At most, they could have simulated a few thousand atoms rather than the billions that were required. P&G applied for access to Argonne National Laboratory’s high-performance computers through the Department of Energy’s INCITE program. Working with researchers from the University of Pennsylvania, company scientists reduced their simulation times from months to hours and improved the formulation of the company’s products. In the future, the company hopes to use molecular dynamics simulations to create new “designer” molecules. See [“Procter and Gamble’s Story” 2009].

Case Study 10: Multidisciplinary Design Optimization and CAD

In October 2011, the Boeing 787 Dreamliner made its first commercial flight, from Tokyo to Hong Kong. Built in response to the increasing price of jet fuel, the 787 is the first commercial plane to be made predominantly of composite materials (carbon-fiber reinforced plastic) rather than aluminum. These materials have a higher strength-to-weight ratio than aluminum, which allows the plane to be lighter and use 20 percent less fuel than any comparably sized airplane. The 787 also has larger windows and can withstand higher interior pressures, thus giving passengers a more comfortable environment and possibly reducing jet lag.

There were many engineering challenges involved in designing a plastic airplane. For instance, the wings of the 787 flex upwards by 3 meters during flight. Traditional rigid-body models, which describe the wing’s shape correctly in the factory or on the ground, do not describe it correctly during flight. To an aerodynamic engineer and a structural engineer, it looks like two different wings—and yet both engineers have to work from the same computer model. The computer has to “know” how the wing will bend in flight.

The entire plane took shape, from beginning to end, on computers; there was not a single drawing board or physical prototype. Each of the more than 10,000 parts, made by 40-plus contractors, was designed in the same virtual environment. The contractors are not just suppliers but are actually co-designers. The virtual environment also facilitates “direct design.” If a customer (i.e., an airline) wants a particular feature, whether it is different doorknobs or different floor plans, an engineer can draw it up on the computer and then it can be built. The days of the assembly line, when every product was the same as every other, are coming to an end.

The mathematical tools involved in the design process include computational linear algebra, differential equations, operations research, computational geometry, optimization, optimal control, data management, and a variety of statistical techniques. See [Grandine 2009] and [Stackpole 2007].

Case Study 11: Robotics

In industry, it isn’t just the final product design that has to be right. The process for making that product also impacts the bottom line. Automated Precision Inc. (API) of Rockville, Maryland, recently introduced a technology that combines laser tracking with polynomial-based kinematic equations to improve the accuracy of machine tools. Typically, robotic machine tools have arms with three axes of rotation. Each link in the arm is controlled separately, leading to cumulative errors in three different coordinate systems and 21 error parameters overall. In API’s Volumetric Error Compensation (VEC) system, the entire machining space is expressed in one coordinate system, with only six error parameters. Using algorithms based on Chebyshev polynomials, the VEC software can then compute the proper tool path in any other coordinate system.
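The role of the polynomials can be sketched as follows: fit a Chebyshev series to calibration measurements on a normalized axis, then evaluate the correction at any commanded position. The "measured" error below is an invented stand-in for laser-tracker data; this is an illustration of the mathematical idea, not API's actual VEC software:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Sketch of Chebyshev-based error compensation (invented data).
x = np.linspace(-1.0, 1.0, 200)                  # normalized axis position
measured_error = 0.05 * x**3 - 0.02 * x + 0.01   # mm, stand-in calibration data

coeffs = C.chebfit(x, measured_error, deg=5)     # low-degree Chebyshev fit
correction = C.chebval(0.3, coeffs)              # compensation at position 0.3
```

Chebyshev bases are a natural choice here because low-degree fits stay well conditioned and the error of a truncated series is spread nearly evenly across the whole working volume.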

One of API’s aerospace customers stated that VEC reduced the time required to calibrate its machine tools from “one week of 12-14 hour days to one eight-hour shift.” Another customer estimated that the process would reduce its assembly and fitting costs by $100 million per year. R&D Magazine cited VEC as one of its 100 technological breakthroughs of the year in 2010. See [“Precision Machining” 2010].

Case Study 12: Supply Chain Management (Biotechnology Industry)

Once you’ve designed the product and built it, you still have to get it to market. This seemingly elementary step can actually be very complex. An instructive example of automated supply chain management took place at Dow AgroSciences, an international company that makes pesticides and other biotechnology products.

The pesticide market is highly regulated and taxed, and the route a product takes from country to country can strongly affect the amount of duties that have to be paid. In addition, certain countries will not allow importation of certain chemicals from certain other countries. Thus, the source of every ingredient in every product has to be tracked.

At first Dow tried using an external vendor to automate its supply chain, but the unique characteristics of their business eventually forced them to model the supply chain in-house. The model represents the supply chain as a directed graph or network, with arrows indicating feasible routes from suppliers to factories to other factories to customers. Decision variables include inventories and quantities sold and produced; parameters include tax rates, shipping and material costs. In all, the network includes half a dozen suppliers, three dozen factories and more than 100 customers (each country counts as one customer). The most cost-effective route for every product can be found by solving a mixed-integer linear programming problem.

The problem is actually harder than the above figures suggest because every path through the network requires a separate set of decision variables. With about 2,100 pathways and 350 final products, the linear program contains about 750,000 variables and half a million equations. Even so, it is generally possible to find the profit-maximizing solution for a single business scenario on a quad-core workstation in two hours. (See [Bassett and Gardner, 2010].)
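At production scale such problems go to a mixed-integer programming solver, but the structure is visible in a deliberately tiny brute-force version. All supplier, factory, and cost values below are hypothetical:

```python
from itertools import product

# Toy network: one product routed from a supplier through a factory to a
# single customer.  Arc costs bundle shipping plus duties; None marks a
# route barred by import regulations.
ship_cost = {("S1", "F1"): 4, ("S1", "F2"): 6,
             ("S2", "F1"): None, ("S2", "F2"): 3}
deliver_cost = {("F1", "C1"): 5, ("F2", "C1"): 9}

def best_route():
    """Enumerate all supplier/factory choices and keep the cheapest
    feasible path -- what a MILP solver does, vastly more cleverly."""
    best = None
    for s, f in product(("S1", "S2"), ("F1", "F2")):
        leg1 = ship_cost[(s, f)]
        if leg1 is None:
            continue  # regulation forbids this route
        cost = leg1 + deliver_cost[(f, "C1")]
        if best is None or cost < best[0]:
            best = (cost, s, f)
    return best
```

With 750,000 variables, enumeration is hopeless; the whole point of the mixed-integer formulation is to let branch-and-bound methods prune almost all of the search space.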

Case Study 13: Supply Chain Management (Automotive Industry)

In 2006, Ford Motor Company was on the brink of a “gruesome supply chain failure.” Its major supplier of interior parts, a company called Automotive Component Holdings (ACH) that was owned by Ford but operated as an independent business, was losing money. ACH manufactured its parts in two underutilized plants in Saline and Utica, Michigan. The company faced an urgent decision: whether to close both plants and outsource all the production to other suppliers (including relocating much of the production machinery), consolidate both operations into one plant, or pursue a mixed strategy of outsourcing and consolidation.

Ford’s management quickly realized that the number of possibilities to evaluate—involving the disposition of more than 40 product lines, requiring 26 manufacturing processes, among more than 50 potential production sites—was far beyond “traditional business analysis.” Over a two-month period, Ford’s research department constructed a model of the constraints and costs for every phase of production. Unfortunately, the model had 359,385 variables and 1,662,554 constraints. Even worse, the problem was nonlinear (primarily because of the effects of capacity utilization).  A mixed integer linear program of this size can be solved (cf. Case Study 12), but a nonlinear program, in general, cannot.

The researchers came up with an ingenious workaround. They split the large model into a facility capacity model and a facility utilization model, each of which was linear. By passing the solutions back and forth between the two models in iterative fashion, they were able to converge on optimal solutions for both. These solutions provided a crucial tool for management because they made it possible to weigh numerous scenarios. The model identified a mixed strategy that saved Ford $40 million compared to the originally preferred strategy of complete outsourcing. In the end, 39 of the model’s 42 sourcing decisions were approved by Ford’s senior management. The Saline plant remained open, and its restructured business improved to the point where Ford was able to find a qualified buyer. See [Klampfl, 2009].
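The iterative scheme can be illustrated, in highly simplified form, as alternating (block-coordinate) minimization: fix one group of variables, solve the resulting easy subproblem exactly, and pass the answer to the other subproblem. A toy version on the smooth objective f(x, y) = (x - 1)^2 + (y - 2)^2 + (x - y)^2:

```python
def alternating_min(iters=60):
    """Minimize f(x, y) = (x - 1)**2 + (y - 2)**2 + (x - y)**2 by
    alternately solving each one-variable subproblem exactly."""
    x = y = 0.0
    for _ in range(iters):
        x = (1.0 + y) / 2.0   # minimizer of f in x with y held fixed
        y = (2.0 + x) / 2.0   # minimizer of f in y with x held fixed
    return x, y
```

The iteration converges to the true minimizer (4/3, 5/3). Ford's two subproblems were large linear programs rather than one-variable quadratics, but the back-and-forth structure is the same.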


2.6 Communications and Transportation

Both the communications and transportation industries have long been active users of mathematics. Some of the earliest applications of operations research were to the scheduling of supply networks, and that continues to be the case today. Algorithms to direct traffic on the Internet and codes that enable everybody’s cell phone to share the same bandwidth have been crucial to the commercial success of the Internet and wireless communication industries, respectively.
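The bandwidth-sharing codes mentioned above are, in CDMA systems, mutually orthogonal spreading sequences. The sketch below is a textbook illustration (not any carrier's actual air interface): two users transmit simultaneously on one channel, and each receiver recovers its own bit by correlation.

```python
def hadamard(n):
    """Rows of the 2**n x 2**n Hadamard matrix: mutually orthogonal
    +/-1 spreading codes."""
    H = [[1]]
    for _ in range(n):
        H = ([row + row for row in H] +
             [row + [-x for x in row] for row in H])
    return H

def cdma_demo():
    """Two users share one channel; each receiver correlates the
    superimposed signal with its own code to recover its bit."""
    codes = hadamard(2)              # four orthogonal codes of length 4
    bits = {1: -1, 2: +1}            # the bits each user transmits
    signal = [bits[1] * c1 + bits[2] * c2
              for c1, c2 in zip(codes[1], codes[2])]
    def decode(user):
        corr = sum(s * c for s, c in zip(signal, codes[user]))
        return 1 if corr > 0 else -1
    return decode(1), decode(2)
```

Because the codes are orthogonal, each user's correlation step cancels the other user's contribution exactly; real systems must also cope with noise, timing offsets, and many more users.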

Case Study 14: Logistics

If any company is synonymous with the word “logistics,” thanks to its advertising program, that company is the United Parcel Service (UPS). The company now operates the ninth-largest airline in the world, one with no human passengers but a lot of cargo. To figure out how to get all those Christmas packages where they’re going, without wasting any space on the planes, it is no surprise that the company has turned to computer algorithms and operations research.

In fact, UPS uses three layers of software, which can be described as short-term, medium-term, and long-term planning. The long-term software projects capacity 10 years into the future and is used, for instance, to make decisions about acquiring new companies. Medium-term optimization allows the company to plan routes. The short-term optimization tool, called the Load Planning Assistant (LPA), helps each hub plan its operations up to two weeks in advance. In addition, a system-wide tool called VOLCANO plans next-day operations for the airplane network, figuring out how to match the current number of packages to the planes available, taking into account their capacities and airport constraints. Both LPA and VOLCANO were developed in collaboration with academic researchers, at Princeton and MIT respectively.
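Matching packages to aircraft of limited capacity is, at its core, a bin-packing problem, which is NP-hard. Production tools like VOLCANO use far more sophisticated optimization, but a classical greedy heuristic conveys the flavor:

```python
def first_fit_decreasing(loads, capacity):
    """Classical greedy heuristic for bin packing: sort loads in
    decreasing order and place each in the first plane with room."""
    planes = []
    for load in sorted(loads, reverse=True):
        for plane in planes:
            if sum(plane) + load <= capacity:
                plane.append(load)
                break
        else:
            planes.append([load])   # open a new plane
    return planes
```

For example, loads of 9, 8, 5, 4, 2, and 2 with capacity 10 are packed into four planes. First-fit-decreasing is provably never far from optimal, which is one reason such heuristics remain workhorses in logistics.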

Because UPS has used operations research for more than 50 years, it is difficult to say how much money these programs have saved, but it is fair to say that the company’s ongoing reputation depends on them. See [“Analytics at UPS” 2011].

Case Study 15: Cloud Computing

When Hurricane Katrina hit New Orleans in 2005, the American Red Cross website experienced a sudden 14-fold increase in traffic. The website crashed, preventing donors from making urgently needed donations. The Red Cross contacted Akamai Technologies to deal with the crisis. Within eight hours their website was running again and the donations were flowing. The Red Cross has continued to work with Akamai ever since, coping successfully with a 15-fold increase in traffic during the California wildfires of 2009 and a 10-fold spike after the Haiti earthquake in 2010. See [“American Red Cross” 2010].

Akamai is in the business of operating high-volume websites, and its recipe for success has both hardware and software components. Much of the slowdown in online traffic occurs in the Internet’s disorganized and unpredictable “middle mile.” To a considerable extent, Akamai can circumvent the middle mile by assigning most of the on-the-fly computing to Internet servers that are close to the individual user. This enhances the user’s perception of the responsiveness and interactivity of a website. Akamai manages more than 35,000 servers, so it has a server close to almost everyone.

Still, the servers do have to talk with each other over the “middle mile,” and Akamai remains committed to using the public network rather than building a proprietary one (which would be prohibitively expensive). The company works around the limitations of the middle mile in a variety of ways. It distributes software to all of its servers that improves on clunky, standard Internet protocols. Also, load-balancing and load-managing software expects and plans for failures in parts of the network, so that alternate routes are found automatically. As a result, the network operates with little human intervention; on average, only 8 to 12 people are required to keep all 35,000 servers running.

Akamai has always depended very heavily on mathematical and computational techniques such as probabilistic algorithms, combinatorial optimization, load balancing, graph theory, discrete mathematics, and operations research. It also supports mathematical education through the Akamai Foundation.
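One load-balancing idea closely associated with Akamai's origins is consistent hashing, which maps both servers and requests onto a circle so that adding or removing a server reassigns only the requests on the affected arcs. A minimal sketch, not Akamai's production system:

```python
import bisect
import hashlib

def _h(key):
    """Hash a string to a point on the ring (a large integer)."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHash:
    """Each server is hashed to many points on a circle; a request goes
    to the first server point clockwise of the request's own hash."""
    def __init__(self, servers, replicas=100):
        self.ring = sorted((_h("%s#%d" % (s, i)), s)
                           for s in servers for i in range(replicas))
        self.points = [p for p, _ in self.ring]

    def lookup(self, request):
        idx = bisect.bisect(self.points, _h(request)) % len(self.ring)
        return self.ring[idx][1]
```

The key property: when one server drops out, only the requests that mapped to that server move; every other request keeps its assignment, which is exactly what a network of tens of thousands of failure-prone servers needs.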


2.7 Modeling Complex Systems

We traditionally sold components for other people’s products. Now we are also selling systems. That changes the character of our business. Mathematics, analysis, simulation, and computation have become essential. [From interviews.]

Mathematical modeling is a key technology in complex systems engineering, from analyzing multi-scale systems in science to evaluating architectural tradeoffs to verifying system designs. Modeling, analysis, simulation, optimization, and control reduce the length of the product design cycle. They also help to document, visualize, and ensure the quality of the resulting system, and identify and estimate risks of large failure events. Complex distributed systems include the Next Generation Power Grid (or “smart grid”) [Beyea, 2010], traffic networks, water supply systems, energy efficient buildings, and medical information networks. For more information on the mathematical challenges in complex systems, see [Hendrickson and Wright, 2006].

Another kind of complexity is the nonlinear behavior exhibited by many scientific and engineered systems, often at multiple scales. This means that small, incremental changes to the inputs can sometimes lead to large and unpredictable changes in the output. Nonlinear dynamical systems remain an active field of research, to which a mixture of theoretical mathematics and computational techniques can be brought to bear.
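The logistic map is the standard miniature example of this sensitivity: two starting points that differ by one part in a billion soon follow completely different trajectories.

```python
def diverge(x0, eps=1e-9, steps=100, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x) from two nearby starting
    points and return the largest gap between the two trajectories."""
    a, b = x0, x0 + eps
    worst = 0.0
    for _ in range(steps):
        a, b = r * a * (1.0 - a), r * b * (1.0 - b)
        worst = max(worst, abs(a - b))
    return worst
```

At r = 4 the map is chaotic: the tiny initial gap roughly doubles each step, so within a few dozen iterations the two trajectories bear no resemblance to one another.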

Case Study 16: Viscous Fluid Flow

Ordinarily most of us don’t think about what our computer or television screens are made of—we’re more interested in what we see on the screen. Nevertheless, new glass technology has been a significant contributor to the spectacular commercial success of large flat-screen TVs, computer monitors, and smart phones in recent years.

As liquid crystal display (LCD) technology advances, thickness uniformity specifications, flatness specifications, and defect limits have been getting more and more stringent. More importantly, the tempo at which customers expect improvements has increased several fold. Corning, a leading manufacturer of LCD glass substrates, uses mathematical models to explore process advancements to improve the attributes of its glass. These models, like the formulation of the glass, are continually refined over time. For instance, one process, called the fusion-draw process, involves two streams of molten glass that flow down the sides of a V-shaped trough and merge into a planar sheet. Modeling the flow of this sheet, and understanding instabilities such as oscillation and buckling, requires the solution of a complex system of nonlinear differential equations.
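Solvers for such nonlinear systems almost invariably rely on Newton-type iterations. The sketch below is not Corning's sheet-flow model; it applies Newton's method to a small algebraic system simply to show the core linearize-solve-update step:

```python
def newton2(F, J, x, y, iters=20):
    """Newton's method for a 2x2 nonlinear system F(x, y) = (0, 0).
    J returns the Jacobian entries (a, b, c, d) of [[a, b], [c, d]];
    each step solves J * delta = -F by Cramer's rule."""
    for _ in range(iters):
        f1, f2 = F(x, y)
        a, b, c, d = J(x, y)
        det = a * d - b * c
        x += (-f1 * d + f2 * b) / det
        y += (-f2 * a + f1 * c) / det
    return x, y
```

In a discretized fluid model the unknowns number in the millions rather than two, so each Newton step requires a large sparse linear solve, which is where computational linear algebra earns its keep.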

The use of mathematical models enables Corning to introduce new products at a rapid pace and with reduced technology risk. An example is Corning Gorilla glass, which differs from LCD glass in composition and hence in its behavior during the sheet manufacturing process. The use of models enabled Corning to rapidly find the process window for manufacturing the new composition. As a result, only limited process start-up experimentation was needed, and the product introduction time was shortened from years to months. See [“Glass once used” 2012].

Case Study 17: Smart Cities

In 2008, for the first time, more than half of the world’s population lived in cities. In the U.S., more than four-fifths do. As the population becomes increasingly urbanized, it has become a greater challenge to manage the traffic, public safety, water, power, and health-care systems that sustain all these people. IBM has become a leading proponent of “smart cities,” a movement that will surely grow. Two examples illustrate the kinds of projects that IBM has worked on.

In 2008, the District of Columbia Water and Sewer Authority (DC Water) contracted with IBM Global Services to improve the management of its infrastructure. IBM installed a database that kept track of every asset in the system, down to the last pipe and manhole cover. As a result, DC Water could begin to anticipate problems rather than merely react to them. The authority made better-informed decisions about repairs, service-call volumes were reduced, and defective meters were replaced. All of these factors enabled DC Water to save $20 million over three years, an impressive return for a project that cost less than $1 million. See [“DC Water” 2011].

Information technology is changing the way that police departments do business. IBM helped New York install a new database, the Crime Information Warehouse, which enables analysts to detect crime patterns in real time. Memphis has gone one step further, using IBM statistical and predictive software to forecast which precincts will see more crime activity. While it is impossible to know the effect of these initiatives with certainty, New York’s serious crime rate has dropped by 35 percent since 2001, and Memphis’ has dropped by 30 percent since 2004.
See [“Memphis PD” 2011].

In Chicago, the police department has networked an estimated 15,000 surveillance cameras around the city, in a project called Operation Virtual Shield. When a crime is reported, the system can immediately call up a live video feed from the nearest camera (as well as recorded video from the time when the crime occurred). Chicago police say that the system has aided in thousands of arrests. See [Bulkeley 2009].

Areas of mathematics and computing involved in these projects include data mining, data storage, biometrics, pattern recognition, risk assessment, statistics and statistical modeling.


2.8 Computer Systems, Software and Information Technology

Watson’s advances in deep analytics and its ability to process unstructured data and interpret natural language will now be tailored to fit the requirements of new solutions in science, healthcare, financial services, and other industries. [Groenfeldt, 2011]

Many businesses are interested in high-performance computing (or “supercomputing”) to address current industrial problems. As shown in some of the above case studies, simply owning a supercomputer is not enough. Businesses need programming and modeling expertise, numerical libraries, and a broad range of software tools that will work on parallel and distributed platforms. Often, businesses of small to medium size cannot afford to build their own IT structure to support high-performance computing, but they can greatly improve their modeling capability by using software on a large distributed network (i.e., cloud computing).

Other rapidly growing areas of IT are computer vision and imaging, natural language processing, information retrieval and machine learning. One of the most spectacular examples of the potential for applications of natural language processing (as well as information retrieval and machine learning) is IBM’s Watson computer system, which defeated the two most successful human contestants in Jeopardy! IBM has already begun to leverage this technology for a range of other applications.

Case Study 18: Serendipity

My guess is that the real killer app for memristors will be invented by a curious student who is just now deciding what EE courses to take. [Williams 2008]

In any commercial enterprise, basic research with no foreseeable bottom-line impact is always the hardest kind of R&D to justify. For this reason it is especially important to acknowledge the rare but transformative occasions when curiosity-driven research hits the jackpot. A beautiful recent example was the discovery of the memristor at HP Labs in 2008.

Stanley Williams had been hired by HP in 1995 to start a fundamental research group, in accordance with company founder David Packard’s belief that HP should “return knowledge to the well of fundamental science from which HP had been withdrawing for so long.” [Williams 2008].  A decade later, while studying approaches to molecular-scale memory, he accidentally created a device—a sandwich of titanium dioxide between two platinum electrodes—whose resistance changed according to the amount of charge that passed through it. In essence, its resistance preserves a memory of its past. This is the origin of the term “memristor.”

Perhaps the most amazing thing is that memristors had been predicted, as a purely mathematical construct, in a little-noticed paper by Leon Chua of UC Berkeley in 1971 [Chua, 1971]. They are the fourth basic passive circuit element, joining resistors, capacitors, and inductors. (A passive element is one that stores or dissipates energy but does not generate it.) The first three elements were discovered back in the 19th century, and are the basis for all of today’s electronics. Williams has stated that he would not have been able to understand what his lab had produced if he had not read Chua’s paper and thought deeply about it. Indeed, other researchers had noticed similar effects without understanding why.
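Chua's construct can be stated compactly. Each classical element links two of the four basic circuit variables (voltage v, current i, charge q, flux linkage φ); the memristor supplies the missing link between charge and flux:

```latex
% Chua's 1971 definition: a memristor pairs charge q with flux linkage
% \varphi, so its "memristance" M depends on the charge history.
\[
  \varphi = f(q), \qquad
  M(q) = \frac{d\varphi}{dq}, \qquad
  v(t) = M\bigl(q(t)\bigr)\, i(t),
  \qquad q(t) = \int_{-\infty}^{t} i(\tau)\, d\tau .
\]
```

When M is constant this collapses to Ohm's law v = M i; the dependence on accumulated charge is precisely the device's “memory.”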

At present, the main application envisaged for memristors, and the one that HP is betting on, is computer memory. A computer with memristor-based storage would not need to “boot up”—turn it on, and it would instantly remember where it was when you turned it off. Over the long term, as Williams’ quote above suggests, they might be used for something no one has thought of yet. For example, because memristors behave in a somewhat similar way to neurons, perhaps they would be the key to a true artificial brain. See [“Properties of memristors” 2011].

For R&D managers, the story of memristors has at least two lessons. First, basic research will pay off eventually… for somebody. And second, pay attention to mathematics.

