Serena is one of the PhD students of the program. She is based in Offenburg in the group of Wolfgang Bessler (the deputy speaker of the RTG). Her research focuses on end-of-life prediction of lithium-ion battery cells by studying, among other things, mechanistic ageing models of the graphite electrode.
Mathematical modelling and numerical simulation have become standard techniques in Li-ion battery research and development. They are used to study battery behaviour, including performance and ageing, and thus to improve the model-based prediction of life expectancy. Serena and others work on an electrochemical model of a graphite-based lithium-ion cell that includes combined ageing mechanisms:
The electrochemistry is coupled to a multi-scale heat and mass transport model based on a pseudo-3D approach.
A time-upscaling methodology is developed that allows the simulation of large time spans (thousands of operating hours). The combined modeling and simulation framework is able to predict calendar and cyclic ageing up to the end of life of the battery cells. The results show a qualitative agreement with the ageing behavior known from the experimental literature.
Serena has a Bachelor's in Chemistry and a Master's in Forensic Chemistry from the University of Torino. She worked in Spain, at the Politecnico di Torino, and in Greece (where she was a Marie Curie fellow at the Foundation for Research and Technology - Hellas) before she decided to spend time in Australia and India.
The person who pointed Gudrun in Magda's direction is Lennart Hilbert, a former co-worker of Magda in Dresden who is now working at KIT on Computational Architectures in the Cell Nucleus (he will be a podcast guest very soon).
On the Portrait of Science page one can find photographs of people from Dresden's Life Science campus. Apart from the photographs, one can also find their stories. How and why did they become scientists? What do they do, what are they passionate about? Magda invites us: "Forget the tubes and Erlenmeyer flasks. Science is only as good as the people who do it. So sit back, scroll down and get to know them looking through the lens of Magdalena Gonciarz. Have you ever wondered what kind of people scientists are? Would you like to know what are they working on? What drives and motivates them - spending days in the basement without the sun? Portrait of Science project aims at uncovering more about people who contribute to science at all levels - Research Group Leaders, Postdocs, PhD Students, Staff Scientists and Technicians. All of them are vital for progress of scientific research and all of them are passionate people with their own motivations."
When she started the Portrait of Science project, Magda challenged herself to take more pictures. She wanted to show the real people behind science and their personalities. This was a creative task, quite different from her work as a scientist, and done with comparably little time. On top of having their pictures taken, interviewees were asked to fill out a questionnaire to accompany the story told by the photographs. Surprisingly, the stories told by her co-workers turned out to be quite inspiring. They showed passion and diverse motivations, and people mentioned their failures as well. There were stories about accidents and their crucial role in careers, and about the coincidence of finding a fascinating book or the right mentor, sometimes as far back as early childhood. Sharing ups and downs, and the experience that there is a light at the end of the tunnel, was a story she needed and which was worth sharing. Knowing how hard scientific work can be, and how several friends and colleagues struggled more than she did, Magda strongly feels that it is useful to show that this is not a private and unique experience, but probably a part of the life of every scientist. This struggle can be overcome with time, effort, and help.
Magda comes from Poland. During her Master's studies, she had the opportunity to do a research placement at the University of Virginia. During that time she felt welcomed as part of a scientific community in which she wanted to stay, so it was a natural decision to proceed with a PhD. She applied to the very prestigious Dresden International Graduate School for Biomedicine and Bioengineering and joined the biological research on proteins and their modifications in the lab of Jörg Mansfeld. After finishing her project, she decided to leave academia. Since 2018 she has been working for the learning and training agency CAST PHARMA, where she is involved in producing e-Learning solutions for pharmaceutical companies.
Magda also talked a bit about her PhD research. As we all know, genes code for proteins. However, one protein can exist in multiple different forms with multiple varying functions. A protein can be post-translationally modified, i.e., modified after it is created, in order to, e.g., be relocated, have different interaction partners, or become activated or destroyed in a matter of minutes. Recently, modern methods such as mass spectrometry have made it possible to see the multitude of post-translationally modified forms of proteins, and have allowed further research using biochemistry or imaging techniques to gain insight into the functions of these modifications, e.g., at different stages of the cell's life.
Gudrun and Magda also talked about the challenge of making a broader audience understand what a particular research topic is all about. It is hard to refer to things we cannot see. It is often easier for people with more translatable research to connect it to various diseases, e.g., cancer; this remains a challenge for those working on more basic issues such as developmental biology.
What Magda took from her time in academia is much more than her results and her part in the basic research story. She feels that curiosity and quick learning skills are her superpowers. She is able to become familiar with any topic in a short amount of time and can manage multiple parts of a project. She also learned resilience and how to deal with challenges and failures on a daily basis, which can prove helpful in all areas of life.
At the moment, she is still deciding whether to continue Portrait of Science in the future, maybe in a changed format.
In short, the conversation covers the questions
The seminal work of Black and Scholes (1973) established modern financial theory. In a Black-Scholes setting, it is assumed that the stock price follows a geometric Brownian motion with constant drift and constant volatility. The stochastic differential equation for the stock price process has an explicit solution, and therefore it is possible to obtain the price of a European call option as a closed-form formula. Nevertheless, the Black-Scholes assumptions have drawbacks. The most criticized one is the constant volatility assumption, which is considered an oversimplification. Several improved models have been introduced to overcome those drawbacks. One significant example is the Heston stochastic volatility model (Heston, 1993). In this model, volatility is modeled indirectly by a separate mean-reverting stochastic process, namely the Cox-Ingersoll-Ross (CIR) process. The CIR process captures the dynamics of the volatility process well. However, it is not easy to obtain option prices in the Heston model, since it has more complicated dynamics than the Black-Scholes model.
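To make the closed-form result concrete, here is a minimal sketch of the Black-Scholes call price; the parameter values in the example are purely illustrative, not taken from the episode.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # standard normal CDF expressed via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes closed-form price of a European call option."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# spot 100, strike 100, rate 5%, volatility 20%, one year to maturity
print(round(bs_call(100.0, 100.0, 0.05, 0.2, 1.0), 2))  # about 10.45
```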
In financial mathematics, one can use several methods to deal with these problems. In general, various stochastic processes are used to model the behavior of financial phenomena. One can then employ purely stochastic approaches, using the tools of stochastic calculus, or probabilistic approaches, using the tools of probability theory. On the other hand, it is also possible to use partial differential equations (the PDE approach). The correspondence between the stochastic problem and its related PDE representation is established with the help of the Feynman-Kac theorem. In their original paper, Black and Scholes transferred the stochastic representation of the problem into its corresponding PDE, the heat equation. After solving the heat equation, they transformed the solution back into the relevant option price. As a third class of methods, one can employ numerical methods such as Monte Carlo methods.
Monte Carlo methods are especially useful for computing the expected value of a random variable. Roughly speaking, instead of examining the probabilistic evolution of this random variable, we focus on its possible outcomes. We generate random numbers with the same distribution as the random variable and use them to simulate possible outcomes. Then we approximate the expected value by the arithmetic average of the outcomes obtained from the Monte Carlo simulation. The idea of Monte Carlo is simple. However, it takes its strength from two essential theorems: Kolmogorov's strong law of large numbers, which ensures convergence of the estimates, and the central limit theorem, which describes the error distribution of the estimates.
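As a sketch of this idea (with illustrative parameters): price a European call by simulating terminal stock prices under geometric Brownian motion and averaging the discounted payoffs.

```python
import random
from math import exp, sqrt

def mc_call(S0, K, r, sigma, T, n, seed=42):
    """Monte Carlo estimate of a European call price under GBM."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)  # sample of a standard normal
        # exact simulation of the terminal stock price
        ST = S0 * exp((r - 0.5 * sigma**2) * T + sigma * sqrt(T) * z)
        total += max(ST - K, 0.0)  # payoff of the call
    # the arithmetic average of discounted payoffs replaces the expectation
    return exp(-r * T) * total / n

estimate = mc_call(100.0, 100.0, 0.05, 0.2, 1.0, 200_000)
print(estimate)  # close to the closed-form value of about 10.45
```

By the central limit theorem the statistical error shrinks like one over the square root of the number of samples, so quadrupling the sample size halves the error.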
Electricity markets exhibit certain properties which we do not observe in other markets. Those properties are mainly due to the unique characteristics of the production and consumption of electricity. Most importantly, electricity cannot be physically stored. This leads to several differences compared to other financial markets. For example, we observe spikes in electricity prices: sudden upward or downward jumps which are followed by a fast reversion to the mean level. Therefore, electricity prices show extreme variability compared to other commodities or stocks.
For example, in stock markets we observe a moderate volatility level ranging between 1% and 1.5%; commodities like crude oil or natural gas have relatively high volatilities ranging between 1.5% and 4%; and electricity has volatilities of up to 50% (Weron, 2000). Moreover, electricity prices show strong seasonality, related to day-to-day and month-to-month variations in electricity consumption. In other words, electricity consumption varies depending on the day of the week and the month of the year. Another important property of electricity prices is that they follow a mean-reverting process. Thus, the Ornstein-Uhlenbeck (OU) process, which has a Gaussian distribution, is widely used to model electricity prices. In order to incorporate the spike behavior, a jump or Lévy component is merged into the OU process. These models are known as generalized OU processes (Barndorff-Nielsen & Shephard, 2001; Benth, Kallsen & Meyer-Brandis, 2007). There exist several further models to capture those properties of electricity prices: structural models based on the equilibrium of supply and demand (Barlow, 2002), Markov jump diffusion models which combine the OU process with pure jump diffusions (Geman & Roncoroni, 2006), regime-switching models which aim to distinguish the base and spike regimes of electricity prices, and finally multi-factor models which have a deterministic component for seasonality, a mean-reverting process for the base signal, and a jump or Lévy process for spikes (Meyer-Brandis & Tankov, 2008).
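A minimal sketch of such a generalized OU ("mean reversion plus spikes") price path could look as follows; all parameter values here are made up for illustration and are not calibrated to any real market.

```python
import random
from math import sqrt

def simulate_spot(n_steps, dt=1/24, mu=50.0, theta=10.0, sigma=5.0,
                  jump_prob=0.01, jump_size=40.0, seed=1):
    """Euler scheme for a mean-reverting OU process with occasional spikes."""
    rng = random.Random(seed)
    x, path = mu, []
    for _ in range(n_steps):
        jump = jump_size if rng.random() < jump_prob else 0.0  # rare spike
        # mean reversion pulls the price back towards mu after each spike
        x += theta * (mu - x) * dt + sigma * sqrt(dt) * rng.gauss(0.0, 1.0) + jump
        path.append(x)
    return path

path = simulate_spot(2000)
# spikes push the price far above the mean level, then it reverts quickly
print(min(path), max(path))
```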
The German electricity market is one of the largest in Europe. Germany's energy strategy follows the objective of phasing out nuclear power plants by 2021 and gradually introducing renewable energy resources. The share of renewables in electricity production is to increase to 80% by 2050. The introduction of renewables also brings some challenges for electricity trading. For example, forecast errors in electricity production can create high risk for market participants. However, the developed market structure in Germany is designed to reduce this risk as much as possible. There are two main electricity spot markets where participants can trade electricity. The first is the day-ahead market, in which trading takes place around noon on the day before delivery; the trades are based on auctions. The second is the intraday market, in which trading starts at 3pm on the day before delivery and continues until 30 minutes before delivery. The intraday market allows continuous trading of electricity, which helps market participants adjust their positions more precisely and reduce the impact of forecast errors.
Carlos is an electrical engineer from Colombia. His first degree is from the Pontificia Universidad Javeriana in Bogotá. For five years now he has been working at Schneider Electric in Berlin. In September 2018 Gudrun met Carlos at the EUREF-Campus in Berlin to discuss the work of Claire Harvey on her Master's thesis. The schedule on that day was very full, but Gudrun and Carlos decided to have a podcast conversation later.
Carlos came to Germany as a car enthusiast. Then he got excited about the possibilities of photovoltaic energy production, and from 2005 to 2007 he studied Renewable Energy in the PPRE Master's course at the Carl von Ossietzky Universität Oldenburg. When he graduated within a group of about 20 Master's students, they found a world ready for their knowledge. Carlos worked on various topics and in different parts of Germany in the field of renewable energy. Now, at Schneider, he is in the unique situation of being able to combine all his interests: he develops the most modern cars, which run on renewable energy. In the course of his work he is also back at his original love: working with electronics, protocols and data.
The work on the EUREF-Campus in Berlin started about 8-10 years ago with more questions than clear ideas. Schneider Electric is a big company with about 150,000 employees all over the world. They deal in all types of software and hardware devices for energy delivery. But the topic for Berlin was completely new: it was a test case for how to construct energy-sustainable districts.
They started out investing in e-mobility with renewable energy and making their own offices a smart building. It is a source of a lot of data telling the story of how energy is produced and consumed. At the moment they collect about 1 GB of data per day from roughly 12,000 measurement points in the office building, store it in a database, and use it as a benchmark to compare against other scenarios. The next step is to find ways to optimize these processes with limited computational possibilities.
This is done with open source code on their own interface, and at the moment it can optimize the micro smart grid on the campus. For example, with 40 charging points for e-cars, consumption is planned according to the production of energy. On campus, traditional batteries are used to buffer the energy, and a bus now operates on the campus which can be discharged and is charged without a cable!
One can say: Carlos is working in a big experiment. This covers not only a lot of new technical solutions. The Energiewende is more than rolling out photovoltaics and wind power. We as a society have to change and plan differently, especially concerning mobility.
Schneider Electric just started an expansion phase to the whole campus, which covers 5.5 ha and has 2,500 people working there. More than 100 charging points for e-cars will be available very soon.
Claire's study courses prepared her for very diverse work in the renewable energy sector. Her decision to work with inno2grid in Berlin was based on the fact that it would help pave the way towards better solutions for planning micro grids and sustainable districts. Also, she wanted to see an actual micro grid at work. The office building of Schneider Electric, where the startup inno2grid has its rooms, is an experiment delivering data on energy production and consumption while being a usual office building. We will hear more about that in the episode with Carlos Mauricio Rojas La Rotta soon.
Micro grids are small-scale electrical grid systems in which self-sufficient supply is achieved. Therefore, the integration of micro grid design within district planning processes should be developed efficiently. In the planning of districts with decentralised energy systems, a unique and customised design of the micro grid is usually required to meet local technical, economic and environmental needs. From a technical standpoint, a detailed understanding of factors such as load use, generation potential and site constraints is needed to design and implement the network correctly and most efficiently. The presence of many different actors and stakeholders contributes to the complexity of the planning process, where varying levels of technical experience and disparate methods of working across teams are commonplace.
Large quantities of digital information are required across the whole life-cycle of a planning project, not just for energy planning but also for asset management and monitoring after a micro grid has been implemented. In the design of micro grids, large amounts of data must be gathered, initial optimization objectives have to be met, and simulating control strategies of a district adapted to customer requirements is a critical step. Linking these processes - being able to assemble data as well as communicate the results and interactions of different "layers" of a project to stakeholders - is a challenge that arises as more cross-sector projects are carried out, with the growing interest in smart grid implementation.
Claire's thesis explores tools to assist the planning process for micro grids on the district scale. Using geographical information system (GIS) software, results relating to the energy planning of a district are linked to geo-referenced data. Layers related to energy planning are implemented, calculating useful parameters and connecting to a database to which different stakeholders within a project can contribute. Resource potential, electrical/thermal demand and supply system dimensioning can be calculated, which helps clients and decision makers visualize the digital information related to a project. Within the open source program QGIS, spatial analysis and optimizations relating to the design of an energy system are performed. As the time dimension is a key part of planning the energy supply system of a micro grid, the data is linked to a Python simulation environment where dynamic analysis can be performed, and the results are fed back into the QGIS project.
SimScale is a cloud-based platform that gives instant access to computational fluid dynamics (CFD) and finite element analysis (FEA) simulation technology, helping engineers and designers to easily test performance, optimize durability or improve efficiency of their design. SimScale is accessible from a standard web browser and from any computer, eliminating the hurdles that accompany traditional simulation tools: high installation costs, licensing fees, deployment of high-performance computing hardware, and required updates and maintenance.
Via the platform, several state-of-the-art open solvers are made available, e.g., OpenFOAM, and meshing with snappyHexMesh. More information about the packages being used can be found at https://www.simscale.com/open-source/ .
On top of providing easier access to open source software, the connected user forum is very active and helps everybody enter the field, even people without prior experience.
Founded in 2012 in Munich (Germany), SimScale is nowadays an integral part of the design validation process for many companies worldwide and for individual users. It is mainly used by product designers and engineers working in Architecture, Engineering & Construction or Heating, Ventilation & Air-Conditioning. In the Electronics, Consumer Goods and Packaging and Containers industries, too, SimScale is useful for testing and optimizing designs in the early development stages.
SimScale offers pricing plans that can be customized, from independent professionals to SMEs and multinational companies. The Community plan makes it possible to use SimScale for free, with 3000 core hours/year using up to 16 cloud computing cores.
The general structure and topics of the first year in Advanced Mathematics were already discussed in our episode 146 Advanced Mathematics with Jonathan Rollin.
This time Gudrun invited two students from her course to give the students' perspective, talking about mathematics, life, and everything.
Yueyang Cai grew up mostly in China. In 2015, her mother's work led Yueyang to Stuttgart. While she was looking for opportunities to study a technical subject in Germany, the English-speaking program in Karlsruhe somehow suggested itself. After one year she is sure she made the right decision.
The second student in the conversation is Siddhant Dhanrajani. His family is Indian but lives in Dubai, so he got his education there in an Indian community following the Indian educational system (CBSE). He had never heard of the engineering program in Karlsruhe but found it through thorough research. He is really amazed that such an excellent study program and such an excellent university as the KIT are not better known for their value in the world.
In the conversation both students talk about their education in their respective countries, their hopes and plans for the mechanical engineering study course, and their experiences in the first year here in Karlsruhe. It is very interesting to see how the different ways to teach mathematics, namely either as a toolbox full of recipes (which the students get well trained in) or as a way to approach problems in the context of a mathematical education, contribute to the experience of being well equipped to work creatively and with a lot of potential as an engineer.
Though the students have only finished the first year of a three-year course, they are already working towards applications and the necessary certificates for a possible Master's program after finishing the course in Karlsruhe.
The topic of the recorded conversation is dynamical sampling. The situation which Roza and other mathematicians study is this: there is a process which develops over time and which in principle is well understood. In mathematical terms this means we know the equation which governs our model of the process, or in other words we know the family of evolution operators. Often this is a partial differential equation which accounts for changes in time and in 1, 2 or 3 spatial variables. This means that if we know the initial situation (i.e., the initial conditions in mathematical terms), we can numerically calculate good approximations of the state of the process at all places and at all times in the future.
But in general, when observing a process, life is not that well sorted. Instead, we might know the principal equation but can find information about the initial condition or material constants only through (maybe only a few) measurements. This leads to two questions: How many measurements are necessary in order to obtain the full information (i.e., to have exact knowledge)? Are there ways to choose the time and the spatial position of a measurement so cleverly as to gain as much new information as possible from each measurement? These are mathematical questions which are answered through studying the equations.
The science of sampling started in the 1940s with Claude Shannon who found fundamental limits of signal processing. He developed a precise framework - the so-called information theory. Sampling and reconstruction theory is important because it serves as a bridge between the modern digital world and the analog world of continuous functions. It is surprising to see how many applications rely on taking samples in order to understand processes. A few examples in our everyday life are: Audio signal processing (electrical signals representing sound of speech or music), image processing, and wireless communication. But also seismology or genomics can only develop models by taking very intelligent sample measurements, or, in other words, by making the most scientific sense out of available measurements.
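The heart of Shannon's theory is the sampling theorem: a signal containing no frequencies above B can be reconstructed from samples taken at a rate above 2B. A small numerical sketch (signal and rates made up for illustration):

```python
from math import sin, pi

def sinc(x):
    # normalized sinc function used in the reconstruction formula
    return 1.0 if x == 0 else sin(pi * x) / (pi * x)

# a 2 Hz sine wave, sampled at 10 Hz (well above the Nyquist rate of 4 Hz)
f, T = 2.0, 0.1
samples = {n: sin(2 * pi * f * n * T) for n in range(-400, 401)}

def reconstruct(t):
    # truncated Whittaker-Shannon interpolation from the samples
    return sum(v * sinc((t - n * T) / T) for n, v in samples.items())

t = 0.53  # a point between sampling instants
err = abs(reconstruct(t) - sin(2 * pi * f * t))
print(err)  # small truncation error
```

In exact arithmetic with infinitely many samples the reconstruction is perfect; here the small residual error comes only from truncating the sum.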
The new development in dynamical sampling is that, in following a process over time, it might be possible to find good options for gaining valuable information about the process at different time instances as well as different spatial locations. In practice, increasing the number of sensors in space is more expensive (or even impossible) than increasing the temporal sampling density. These issues are overcome by a spatio-temporal sampling framework for evolution processes. The idea is to use a reduced number of sensors, with each being activated more frequently. Roza refers to a paper by Enrique Zuazua in which he and his co-author study the heat equation and construct a series of later-time measurements at a single location throughout the underlying process. The heat equation is prototypical, and one can use similar ideas in a more general setting. This is one topic on which Roza and her co-workers have succeeded and want to proceed further.
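The core idea can be illustrated with a toy discrete model (all sizes and values are made up): a diffusion-like evolution x ↦ Ax on four nodes is observed at a single node only, but at four successive times, and these repeated temporal samples suffice to recover the whole initial state.

```python
# toy dynamical sampling: recover the initial state of a discrete
# diffusion from measurements taken only at node 0, at four times
n = 4
L = [[1, -1, 0, 0],      # graph Laplacian of a path on 4 nodes
     [-1, 2, -1, 0],
     [0, -1, 2, -1],
     [0, 0, -1, 1]]
# one diffusion step: A = I - 0.3 * L
A = [[(i == j) - 0.3 * L[i][j] for j in range(n)] for i in range(n)]

x_true = [1.0, -2.0, 0.5, 3.0]   # "unknown" initial state

# simulate: sample node 0 of A^t x at times t = 0, 1, 2, 3
b, state = [], x_true[:]
for _ in range(n):
    b.append(state[0])
    state = [sum(A[i][j] * state[j] for j in range(n)) for i in range(n)]

# observation matrix: row t is e_0^T A^t
rows, row = [], [1.0, 0.0, 0.0, 0.0]
for _ in range(n):
    rows.append(row[:])
    row = [sum(row[i] * A[i][j] for i in range(n)) for j in range(n)]

# solve the 4x4 linear system rows @ x = b (Gauss-Jordan elimination)
M = [r[:] + [bi] for r, bi in zip(rows, b)]
for c in range(n):
    p = max(range(c, n), key=lambda r: abs(M[r][c]))  # partial pivoting
    M[c], M[p] = M[p], M[c]
    M[c] = [v / M[c][c] for v in M[c]]
    for r in range(n):
        if r != c:
            M[r] = [v - M[r][c] * w for v, w in zip(M[r], M[c])]
x_rec = [M[i][n] for i in range(n)]
print(x_rec)  # recovers [1.0, -2.0, 0.5, 3.0] up to rounding
```

The recovery works because, for this evolution operator, the rows e_0^T A^t for t = 0, ..., 3 are linearly independent; choosing sensor locations and times so that this holds is exactly the kind of question dynamical sampling studies.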
After Roza graduated with a Ph.D. in Mathematics from the University of Vienna, she worked as an Assistant Professor at the Ss. Cyril and Methodius University in Skopje (Macedonia), and after that at Vanderbilt University in Nashville (Tennessee). Nowadays she is a faculty member of Ball State University in Muncie (Indiana).
They started the conversation with the question: What is algebraic geometry? It is a generalisation of what one learns in linear algebra, insofar as it studies properties of polynomials such as their roots. But it considers systems of polynomial equations in several variables, so-called multivariate polynomials. There are diverse applications in engineering, biology, statistics and topological data analysis. Among them, Eliana is mostly interested in questions from computer graphics and statistics.
In any animated movie or computer game all objects have to be represented by the computer. Often the surface of the geometric objects is parametrized by polynomials. The image of the parametrization can also be defined by an equation. For calculating interactions it can be necessary to know the corresponding equation in the three usual space variables. One example, which comes up in school and in introductory courses at university, is the circle. Its representation in different coordinate systems or as a parametrized curve lends itself to interesting problems for students to solve.
Even more interesting, and often difficult to answer, is the simple question of the intersection curve of two surfaces in the computer representation if these are parametrized objects. Moreover, real-time graphics for computer games need fast and reliable algorithms for that question. Specialists in computer graphics found that not all curves and surfaces can be parametrized. It was a puzzling question until they talked to people working in algebraic geometry, who knew that the genus of a curve tells you whether a parametrization is possible.
For the practical work, symbolic algebra packages help. They are based on the concept of the Gröbner basis. Gröbner bases help to translate between representations of surfaces and curves as parametrized objects and as graphs of functions. Nevertheless, the result is often a very long polynomial with many terms (like 500), which is not so straightforward to analyse.
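A tiny sketch of this translation using SymPy (chosen here only as an illustration): for the circle example from above, write c and s for cos t and sin t; a Gröbner basis with respect to an elimination order then recovers the implicit equation of the circle from the parametrization.

```python
from sympy import symbols, groebner

c, s, x, y = symbols('c s x y')
# parametrization x = cos t, y = sin t, with the relation c^2 + s^2 = 1
ideal = [x - c, y - s, c**2 + s**2 - 1]
# lex order with c and s first eliminates the parameters
G = groebner(ideal, c, s, x, y, order='lex')
print(G.exprs)  # the basis contains the implicit equation of the circle
```

The polynomial x² + y² − 1 lies in the ideal generated by the three input polynomials, so it reduces to zero modulo the computed basis.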
A second research topic of Eliana's is algebraic statistics. It is a very recent field which evolved only in the last 20-30 years. In the typical problems one studies discrete or polynomial equations using symbolic computations, with combinatorics on top. Often numerical algebraic tools are necessary. The field is algebraic in the sense that many popular statistical models are parametrized by polynomials; the points in the image of the parametrization are the probability distributions in the statistical model. The aim of the research is to study properties of statistical models using algebraic geometry, for instance to describe the implicit equations of the model.
Eliana already liked mathematics at school but was not always very good at it. When she decided to take a Bachelor's course in mathematics, she liked the very friendly environment at her faculty at the Universidad de los Andes, Bogotá. She was introduced to her research field through a course in combinatorial commutative algebra there. She was encouraged to apply for a Master's program in the US and to work on elliptic curves at Binghamton University (State University of New York). After her Master's in 2011 she stayed in the US to better understand syzygies within her work on a PhD at the University of Illinois at Urbana-Champaign. Since 2018 she has been a postdoc at the MPI MiS in Leipzig and likes the very applied focus, especially on algebraic statistics.
In her experience, mathematics is a good topic to work on in different places, and it is important to have role models in your field.
Lattice Boltzmann methods (LBM) are an established method of computational fluid dynamics. The solution of temperature-dependent problems - modeled by the Boussinesq approximation - with LBM has also been done for some time. Moreover, LBM have been used to solve optimization problems, including parameter identification, shape optimization and topology optimization. Usual optimization approaches for partial differential equations are strongly based on the corresponding adjoint problem, especially since this method also provides the sensitivities of quantities in the optimization process. This is very helpful, but finding the adjoint problem for each new problem is also very hard and needs a lot of experience and deep mathematical understanding.
Instead, Asher uses automatic differentiation (AD), which is very flexible and user-friendly. His algorithm combines an extension of LBM to porous media models as part of a shape optimization framework. The main idea of that framework is to use the permeability as a geometric design parameter instead of a rigid object which changes its shape in the iterative process. The optimization itself is carried out with line search methods, whereby the sensitivities are calculated by AD instead of via the adjoint problem.
The method benefits from a straightforward and extensible implementation, as the use of AD provides a way to obtain accurate derivatives with little knowledge of the mathematical formulation of the problem. Furthermore, the simplicity of the AD system allows optimization to be easily integrated into existing simulations - for example in the software package OpenLB, which Asher used in his thesis.
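The idea behind forward-mode AD can be sketched with dual numbers; this is a generic illustration of the technique, not the actual implementation used in OpenLB or in the thesis.

```python
class Dual:
    """Dual number val + der*eps with eps^2 = 0: carries a value
    together with its derivative through every arithmetic operation."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # the product rule is applied automatically
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def f(x):
    # any function built from + and * is differentiated exactly
    return x * x * x + 2 * x

x = Dual(2.0, 1.0)   # seed derivative 1 to differentiate w.r.t. x
y = f(x)
print(y.val, y.der)  # f(2) = 12.0 and f'(2) = 3*4 + 2 = 14.0
```

This is why so little knowledge of the mathematical formulation is needed: the simulation code is simply executed on dual numbers, and exact sensitivities fall out alongside the values.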
One example used to test the algorithm is finding the shape of an object under Stokes flow such that the drag becomes minimal. It is known that the optimal shape looks like an American football. The new algorithm converges quickly to that shape.
This is the first of the two episodes from Oxford in 2018.
Roisin Hill works at the National University of Ireland in Galway on the west coast of Ireland. The university has 19,000 students and 2,000 staff. Roisin is a PhD student in Numerical Analysis at the School of Mathematics, Statistics & Applied Mathematics. Gudrun met her at her poster about balanced norms and mesh generation for singularly perturbed reaction-diffusion problems. This is a collaboration with Niall Madden, who is her supervisor in Galway.
The name of the poster refers to three topics which are interlinked in their research. Firstly, water flow is modelled as a singularly perturbed equation in a one-dimensional channel. Because the fluid does not move at the boundary, there has to be a boundary layer in which the flow properties change, possibly very rapidly. So, the second topic is that, depending on the boundary layer, the problem is singularly perturbed, and in the limit it is even ill-posed. When solving this equation numerically, it would be best to have a fine mesh at places where the error is large. Roisin uses a posteriori information to see where the largest errors occur and changes the mesh accordingly. Choosing the best norm for measuring errors is the third topic in the mix and strongly depends on the type of singularity.
More precisely, as their prototypical test case they look for u(x) as the numerical solution of the problem
−ε² u''(x) + b(x) u(x) = f(x) on (0,1), u(0) = u(1) = 0,
for given functions b(x) and f(x). It is singularly perturbed in the sense that the positive real parameter ε may be arbitrarily small. If we formally set ε = 0, the problem becomes ill-posed, since the boundary conditions can no longer be fulfilled. The numerical schemes of choice are finite element methods - implemented in FEniCS with linear and quadratic elements. The numerical solution and its generalisations to higher-dimensional problems, and to the closely related convection-diffusion problem, present numerous mathematical and computational challenges, particularly as ε → 0. The development of algorithms for robust solution is the subject of intense mathematical investigation. Here “robust” means two things:
In order to measure the error, the energy norm sounds like a good basis - but as ε → 0 the energy norm of the boundary-layer components itself vanishes (like √ε), so the norm fails to "see" the layers. They were looking for an alternative, which they found in the literature as the so-called balanced norm, which remains O(1) as ε → 0. Therefore, the balanced norm is indeed a better basis for error measurement.
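This scaling can be checked by hand. A quick numerical sketch (assuming the standard scaling −ε²u″ + b(x)u = f, so that the energy norm is (ε²|u′|² + ‖u‖²)^(1/2) and the balanced norm replaces ε² by ε): for the typical boundary-layer function u(x) = exp(−x/ε) on (0,1) the integrals have closed forms, and the two norms behave very differently.

```python
import math

# For u(x) = exp(-x/eps) on (0,1):
#   int_0^1 u^2 dx    = (eps/2)   * (1 - exp(-2/eps))
#   int_0^1 (u')^2 dx = (1/(2*eps)) * (1 - exp(-2/eps))
# Energy norm:   sqrt(eps^2 * |u'|^2 + |u|^2)  -> shrinks like sqrt(eps)
# Balanced norm: sqrt(eps   * |u'|^2 + |u|^2)  -> stays O(1)

def layer_norms(eps):
    c = 1.0 - math.exp(-2.0 / eps)
    l2_sq = eps * c / 2.0          # ||u||^2
    grad_sq = c / (2.0 * eps)      # ||u'||^2
    energy = math.sqrt(eps**2 * grad_sq + l2_sq)
    balanced = math.sqrt(eps * grad_sq + l2_sq)
    return energy, balanced

for eps in [1e-2, 1e-4, 1e-6]:
    e, b = layer_norms(eps)
    print(f"eps={eps:.0e}  energy norm={e:.5f}  balanced norm={b:.5f}")
# the energy norm decays like sqrt(eps); the balanced norm tends to 1/sqrt(2)
```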
After she finished school Roisin became an accountant. She believed what she was told: if you are good at mathematics, accountancy is the right career. Later her daughter became ill and had to be partially schooled at home. This was the moment when Roisin first encountered applied mathematics and fell in love with the topic. Inspired by her daughter - who did a degree in general science specialising in applied mathematics - Roisin studied mathematics and is a PhD student now (since Sept. 2017). Her enthusiasm has created impressive results: She won a prestigious Postgraduate Scholarship from the Irish Research Council for her four-year PhD program.
Her first partner was Karen Page. She works in Mathematical Biology and is interested in mathematical models for pattern formation. An example would be the question why (and how) a human embryo develops five fingers on each hand. The basic information for that is coded into the DNA but how the pattern develops over time is a very complicated process which we understand only partly. Another example is the patterning of neurons within the vertebrate nervous system. The neurons are specified by levels of proteins. Binding of other proteins at the enhancer region of DNA decides whether a gene produces protein or not. This type of work needs a strong collaboration with biologists who observe certain behaviours and do experiments. Ideally they are interested in the mathematical tools as well.
One focus of Karen's work is the development of the nervous system in its embryonic form as the neural tube. She models it with the help of dynamical systems. At the moment the model contains three ordinary differential equations for the temporal changes in the levels of three proteins. Since the proteins influence each other, the system is coupled. Moreover, a fourth protein enters the system as an external parameter. It is called sonic hedgehog (Shh). It plays a key role in regulating the growth of digits on limbs and the organization of the brain. It has different effects on the cells of the developing embryo depending on its concentration.
Concerning the mathematical theory, the Poincaré–Bendixson theorem completely characterizes the long-time behaviour of two-dimensional dynamical systems. Working with three equations, there is room for more interesting long-term scenarios; for example, it is possible to observe chaotic behaviour.
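To make the structure of such a model concrete, here is a hypothetical toy system of this shape - three mutually repressing protein levels p1, p2, p3 with an external morphogen level s (standing in for Shh) as a parameter. The Hill-type repression terms and all numbers are invented for illustration; this is not Karen Page's actual model.

```python
# Toy coupled ODE system: three mutually repressing proteins, with an
# external morphogen level s (a stand-in for Shh) as a parameter.
# Integrated with classical RK4 in pure Python. Illustrative only.

def rhs(p, s):
    p1, p2, p3 = p
    # Hill-type repression: each protein is produced unless repressed
    # by the other two; s boosts production of p3. Linear decay -p_i.
    return [1.0 / (1.0 + p2**2 + p3**2) - p1,
            1.0 / (1.0 + p1**2 + p3**2) - p2,
            s   / (1.0 + p1**2 + p2**2) - p3]

def rk4_step(p, s, h):
    k1 = rhs(p, s)
    k2 = rhs([p[i] + 0.5 * h * k1[i] for i in range(3)], s)
    k3 = rhs([p[i] + 0.5 * h * k2[i] for i in range(3)], s)
    k4 = rhs([p[i] + h * k3[i] for i in range(3)], s)
    return [p[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(3)]

s, h = 2.0, 0.01          # morphogen level and time step (made up)
p = [0.1, 0.2, 0.0]       # initial protein levels (made up)
for _ in range(5000):     # integrate to t = 50
    p = rk4_step(p, s, h)
print(p)                  # near-steady protein levels; p3 dominates here
```

Varying the parameter `s` and watching which protein "wins" is the toy analogue of different Shh concentrations steering cells into different fates.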
Karen was introduced to questions of Mathematical Biology when starting to work on her DPhil. Her topic was Turing patterns. These are possible solutions to systems of partial differential equations far from thermodynamic equilibrium. They develop from random perturbations about a homogeneous state, with the help of an input of energy.
Prof. Page studied mathematics and physics in Cambridge and did her DPhil in Oxford in 1999. After that she spent two years at the Institute for Advanced Study in Princeton and has been working at UCL since 2001.
She studied at the Charles University in Prague, and got her PhD in 2013 at the École normale supérieure de Cachan in Rennes. Her time in Germany started in 2013 when she moved to the Max Planck Institute for Mathematics in the Sciences in Leipzig as a postdoc.
Gudrun and Martina talk about randomness in the modeling of fluid motion. This topic is connected to the study of turbulent flow. Of course, we observe turbulence all around us, i.e. chaotic behaviour of the pressure and the velocity field in fluid flow. One example is the smoke pattern of a freshly extinguished candle. Near the candle the flow is laminar, then we observe transitional turbulent flow, and fully turbulent flow the further away the smoke travels. A second example is Rayleigh–Bénard convection. Under the influence of a temperature gradient and gravity, one observes convection rolls when the temperature difference between bottom and top becomes large enough. If we look more closely, one can describe the motion as a mean flow plus random fluctuations. These fluctuations are difficult to measure, but their statistical properties are reproduced more easily.
A general procedure in physics and the sciences is to replace expensive time averages by ensemble averages, which can be computed simultaneously on a parallel computer. The reason why this often works is the so-called ergodic hypothesis. To justify this from the mathematical side, the main problem is to find the right measure for the ensemble average.
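The time-average-versus-ensemble-average idea can be seen in a toy stochastic model. Below is a sketch with an Ornstein–Uhlenbeck process (all parameters chosen for illustration): the long-time average of X² along one trajectory agrees with the ensemble average of X² over many short, independent trajectories, both matching the stationary value. For the fluid equations, finding the right invariant measure is exactly the hard part discussed above.

```python
import random

# Ergodicity illustrated with an Ornstein-Uhlenbeck process
#   dX = -X dt + sqrt(2) dW,  stationary distribution N(0, 1),
# discretized with Euler-Maruyama. (Toy sketch, not a fluid model.)

random.seed(1)
dt = 0.01
noise_amp = (2 * dt) ** 0.5

def step(x):
    return x - x * dt + noise_amp * random.gauss(0.0, 1.0)

# Time average of X^2 along one long trajectory.
x, acc, nsteps = 0.0, 0.0, 200000          # total time T = 2000
for _ in range(nsteps):
    x = step(x)
    acc += x * x
time_avg = acc / nsteps

# Ensemble average of X^2 over many independent short runs,
# each long enough to forget its initial condition.
runs, ens = 2000, 0.0
for _ in range(runs):
    x = 0.0
    for _ in range(400):                   # t = 4, several relaxation times
        x = step(x)
    ens += x * x
ens_avg = ens / runs

print(time_avg, ens_avg)   # both close to the stationary value 1
```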
In the model problem one can see that the solution is continuously dependent on the initial condition and the solution operator has a semigroup property. For random initial conditions, one can construct the solution operator correspondingly.
Already with this toy problem one sees that the justification of using ensemble averages is connected to the well-posedness of the problem. In general, this is not a priori known. The focus of Martina's work is to prove the existence of steady solutions for the compressible flow system, including stochastic forces, with periodic boundary conditions (i.e. on the torus).
At the moment, we know that there are global weak solutions but only local (in time) strong solutions.
It turned out that the right setting to study the problem is that of so-called dissipative martingale solutions. Unfortunately, in this setting the velocity is not smooth enough to be a stochastic process, but the energy inequality can be proved. The proof rests on introducing artificial dissipation in the mass conservation and a small term of higher-order regularity for the density. Then the velocity is approximated through a Faedo-Galerkin approximation, and a number of independent limiting processes can be carried out successfully.
The project is a collaboration with Dominic Breit and Eduard Feireisl.
Bruno spent his second master year at the Karlsruhe Institute of Technology (KIT). Gudrun had the role of his supervisor at KIT while he worked on his Master's thesis at the Chair of Renewable and Sustainable Energy Systems (ENS) at TUM in Garching. His direct contact person there was Franz Christange from the group of Prof. Thomas Hamacher.
Renewable energy systems are a growing part of the energy mix. In Germany, installed renewable capacity grew from 4,168 MW in 1990 to 104,024 MW in 2016. This corresponds to an annual power consumption share of 3.4% and 31.7%, respectively. For the related research this means a crucial shift: the conventional centralized, synchronous-machine-dominated models have to be exchanged for decentralized, power-electronics-dominated networks - so-called microgrids. This needs collaboration of mechanical and electrical engineers. The interdisciplinary group at TUM has the goal of modeling future microgrids in order to easily configure and simulate them.
One additional factor is that most renewable energy systems depend on the right weather conditions, and there is always the problem of reliability. Especially for photovoltaics (PV) and wind turbines, weather phenomena such as solar irradiation, air temperature and wind speed have to be known in advance in order to plan for these types of systems.
There are two fundamentally different approaches to model weather data: firstly, the numerical weather and climate models, which provide the weather forecast for the next days and years; secondly, so-called weather generators. The numerical models are very complex and have to run on the largest computer systems available. Therefore, in order to have a simple enough model for planning the renewable energy resources (RER) at a certain place, weather generators are used. They produce synthetic weather data on the basis of the weather conditions in the past. They do not predict or forecast the values of a specific weather phenomenon at a specific time but provide random simulations whose outputs show the same or very similar distributional properties as the measured weather data of the past.
The group in Garching wanted to have a time-dynamic analytical model. The model is time-continuous, which grants it the ability to use any time sampling interval. This means they wanted a system of equations for the generation of synthetic weather data with as few parameters as possible. When Bruno started his work, there existed a model for Garching (developed by Franz Christange) with about 60 parameters. The aim of Bruno's work was to reduce the number of parameters and to show that the general concept can be used worldwide, i.e. that it can adapt to different weather data in different climate zones. In the thesis the tested locations range from 33° South to 40° North.
In the synthesis step of the weather generator the crucial tool is stochastic relations. Mostly the standard normal distribution is applied and shaped to match the rate of change and the correlation between RER. In particular, this means that the model describes the fundamental behavior of weather (mean, standard deviation, time- and cross-correlation) and introduces it into white noise in an analytical way. This idea was first introduced for crop estimation by Richardson in 1985. Time-dependence works on different time scales, e.g. through days and through seasons.
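The flavour of such a generator can be sketched in a few lines: a deterministic seasonal mean plus an autoregressive process that shapes white noise with a prescribed standard deviation and day-to-day correlation. All parameter values below are invented, and the real TUM model is time-continuous, multi-variable and cross-correlated - this is only the simplest discrete, single-variable analogue.

```python
import math, random

# Minimal weather-generator sketch: seasonal mean + AR(1)-shaped
# white noise for a daily temperature series. Parameters are
# hypothetical stand-ins for values fitted to measured data.

random.seed(42)

mean_annual = 10.0   # deg C, annual mean (assumed)
amplitude   = 8.0    # seasonal swing (assumed)
sigma       = 3.0    # std deviation of daily anomalies (assumed)
rho         = 0.8    # lag-1 autocorrelation, day-to-day persistence (assumed)

def synthetic_year():
    temps, anomaly = [], 0.0
    for day in range(365):
        seasonal = mean_annual + amplitude * math.sin(2 * math.pi * day / 365)
        # AR(1): today's anomaly = rho * yesterday's + shaped white noise;
        # the sqrt(1 - rho^2) factor keeps the stationary std at sigma.
        anomaly = rho * anomaly + sigma * math.sqrt(1 - rho**2) * random.gauss(0, 1)
        temps.append(seasonal + anomaly)
    return temps

t = synthetic_year()
print(min(t), max(t), sum(t) / len(t))
```

Each call produces a different synthetic year, but long-run statistics (mean, spread, persistence) reproduce the prescribed parameters - which is exactly what is asked of a weather generator.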
In the analysis step it is then necessary to parametrize the measured weather data and to provide a parameter set to the weather model.
Bruno started his Master course in Lisbon at the Instituto Superior Técnico (IST). In his second year he changed to KIT in Karlsruhe and put his focus on energy systems. In his thesis he uses a lot of mathematics which he learned during his Bachelor education and had to recall and refresh.
The results of the project are published in the open-source model 'solfons' on GitHub, which uses Python and was developed in MATLAB.
For students starting an engineering study course it is clear, that a mathematical education will be an important part. Nevertheless, most students are not aware that their experiences with mathematics at school will not match well with the mathematics at university. This is true in many ways. Mathematics is much more than calculations. As the mathematical models become more involved, more theoretical knowledge is needed in order to learn how and why the calculations work. In particular the connections among basic ideas become more and more important to see why certain rules are valid. Very often this knowledge also is essential since the rules need to be adapted for different settings.
In their everyday work, engineers combine the use of well-established procedures with the ability to come up with solutions to yet unsolved problems. In our mathematics education, we try to support these skills insofar as we train certain calculations with the aim that they become routine for the future engineers. But we also show the ideas and ways in which mathematicians came up with them, and how they are applied again and again at different levels of abstraction. This shall help the students to become creative in their engineering career.
Moreover seeing how the calculation procedures are derived often helps to remember them. So it makes a lot of sense to learn about proofs behind calculations, even if we usually do not ask to repeat proofs during the written exam at the end of the semester.
The course is structured as 2 lectures, 1 problem class and 1 tutorial per week. Moreover there is a homework sheet every week. All of them play their own role in helping students to make progress in mathematics.
The lecture is the place to see new material and to learn about examples, connections and motivations. In this course there are lecture notes which cover most topics of the lecture (and on top of that there are a lot of books out there!). So the lecture is the place where students follow the main ideas and take these ideas to work with the written notes of the lecture later on.
The theory taught in the lecture becomes more alive in the problem classes and tutorials. In the problem classes students see how the theory is applied to solve problems and exercises. But most importantly, students must solve problems on their own, with the help of the material from the lecture. Only in this way do they learn how to use the theory. Very often the problems seem quite hard in the sense that it is not clear how to start or proceed. This is due to the fact that students are still learning to translate the information from the lecture into a net of knowledge they build for themselves. In the tutorial the tutor and the fellow students work together to find the first steps onto a ladder to solving the problems on the homework.
Gudrun and Jonathan love mathematics. But from their own experience they can understand why some of the students fear mathematics and expect it to be too difficult to master. They have the following tips:
In the lecture course, students see the basic concepts of different mathematical fields. Namely, it covers calculus, linear algebra, numerics and stochastics. Results from all these fields will help them as engineers to calculate as well as to invent. There is no standard or best way to organize the topics since there is a network of connections between results and a lot of different ways to end up with models and calculation procedures. In the course in Karlsruhe in the first semester we mainly focus on calculus and touch the following subjects:
All of these topics have applications and typical problems which will be trained in the problem class. Moreover, they are stepping stones towards mastering more and more complex problems. This already becomes clear during the first semester and will become even clearer towards the end of the course.
Marie is Chief Research Scientist at the Norwegian research laboratory Simula near Oslo. She is Head of department for Biomedical Computing there. Marie got her university education with a focus on Applied Mathematics, Mechanics and Numerical Physics as well as her PhD in Applied mathematics at the Centre for Mathematics for Applications in the Department of Mathematics at the University of Oslo.
Her work is devoted to providing robust methods to solve Partial Differential Equations (PDEs) for diverse applications. On the one hand, this means that on the mathematical side she works on numerical analysis, optimal control, robust Finite Element software and uncertainty quantification, while on the other hand she is very much interested in modeling with the help of PDEs, in particular mathematical models of physiological processes. These models are useful to answer 'What if?'-type questions much more easily than laboratory experiments.
In our conversation we discussed one of the many applications - Cerebral fluid flow, i.e. fluid flow in the context of the human brain.
Medical doctors and biologists know that the soft matter cells of the human brain are filled with fluid. Also the space between the cells contains the water-like cerebrospinal fluid. It provides a bath for the human brain. The brain expands and contracts with each heartbeat and approximately 1 ml of fluid is interchanged between brain and spinal area. What the specialists do not know is: Is there a circulation of fluid? This is especially interesting since there is no traditional lymphatic system to transport away the biological waste of the brain (a process which is at work everywhere else in our body). So how does the brain get rid of its litter? There are several hypotheses:
The aim of Marie's work is to numerically test these (and other) hypotheses. Basic testing starts on very idealised geometries. For the overall picture one useful simplified geometry is the annulus, i.e. a region bounded by two concentric circles. For the look at the microlevel a small cube can be the chosen geometry.
As material law, flow in a porous medium based on Darcy's law is the starting point - possibly taking into account the coupling with elastic behaviour on the boundary.
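The Darcy material law itself fits in a few lines. A minimal sketch (all parameter values below are hypothetical, chosen only to illustrate the law, not taken from brain tissue data): the flux is q = -(k/μ)·dp/dx, and for porous layers in series the effective permeability is the harmonic mean of the layer permeabilities.

```python
# Darcy's law q = -(k/mu) * dp/dx in 1D, for a stack of porous
# layers in series. All numbers are hypothetical, for illustration.

mu = 7e-4          # fluid viscosity in Pa*s (assumed, water-like)
layers = [         # (thickness in m, permeability in m^2), hypothetical
    (0.001, 1e-14),
    (0.002, 5e-15),
    (0.001, 2e-14),
]

total_L = sum(L for L, _ in layers)
# layers in series: thickness-weighted harmonic mean of permeabilities
k_eff = total_L / sum(L / k for L, k in layers)

dp = 100.0         # total pressure drop in Pa across the stack (assumed)
q = (k_eff / mu) * dp / total_L        # Darcy flux in m/s

# consistency check: the same flux through each layer gives individual
# pressure drops that add up to the total drop
drops = [q * mu * L / k for L, k in layers]
print(k_eff, q, sum(drops))
```

The same harmonic-averaging logic reappears in the finite element discretizations used on the idealised geometries, where permeability jumps between tissue compartments.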
The difficult non-mathematical questions which have to be answered are:
In the near future she hopes to better understand the multiscale character of the processes. Especially for embedding 1D structures into 3D geometry there is almost no theory available.
For the project Marie has been awarded a FRIPRO Young Research Talents Grant of the Research Council of Norway (3 years, starting April 2016) and the very prestigious ERC Starting Grant (5 years, starting 2017).
Maria Lopez-Fernandez from the University La Sapienza in Rome was one of the seven invited speakers. She got her university degree at the University of Valladolid in Spain and worked as an academic researcher in Madrid and at the University of Zürich.
Her field of research is numerical analysis and in particular the robust and efficient approximation of convolutions. The conversation mainly focusses on its applications to wave scattering problems. The important questions for the numerical tools are consistency, stability and convergence analysis. The methods proposed by Maria are Convolution Quadrature type methods for the time discretization, coupled with boundary integral methods for the spatial discretization. Convolution Quadrature methods are based on the Laplace transform and numerical integration. They were initially mostly developed for parabolic problems and are now adapted to serve in the context of (hyperbolic) wave equations. Convolution quadrature methods introduce artificial dissipation into the computation, which stabilizes the numerics. However, it would be physically more meaningful to work instead with schemes which conserve mass.
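The basic mechanism of convolution quadrature can be shown on a toy problem rather than a wave equation. The sketch below approximates the fractional integral, whose kernel has Laplace transform K(s) = s^(-α), using only K and the generating function of backward Euler; the quadrature weights then follow from a binomial recursion. This is a standard textbook example, not the wave-scattering scheme itself, where the same construction is coupled with boundary integral operators.

```python
import math

# Convolution quadrature (CQ) sketch: approximate
#   (k * g)(T) = int_0^T k(T - s) g(s) ds
# using only the Laplace transform K(s) of the kernel and the
# backward Euler generating function delta(zeta) = (1 - zeta)/h.
# For the fractional integral of order alpha, K(s) = s^(-alpha), the
# weights are the coefficients of h^alpha * (1 - zeta)^(-alpha).

def cq_fractional_integral(g, T, n, alpha):
    h = T / n
    # binomial-series recursion for the coefficients of (1-zeta)^(-alpha)
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (j - 1 + alpha) / j)
    # discrete convolution of the weights with samples of g
    return sum(h**alpha * w[j] * g(T - j * h) for j in range(n + 1))

alpha, T = 0.5, 1.0
approx = cq_fractional_integral(lambda t: 1.0, T, 1000, alpha)
exact = T**alpha / math.gamma(alpha + 1.0)   # exact value for g = 1
print(approx, exact)
```

Backward Euler CQ is first-order accurate; halving the step size roughly halves the error, which is easy to verify by rerunning with a smaller `n`.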
She is mainly interested in
The motivational example for her talk was the observation of severe acoustic problems inside a new building at the University of Zürich. Any conversation in the atrium made a lot of noise, and if someone was speaking loudly it was hard for others to understand. An improvement was provided by specialised engineers who installed absorbing panels. From the mathematical point of view this is a nice application of the modelling and numerics of wave scattering problems. Of course, it would make a lot of sense to simulate the acoustic situation for such spaces before building them - if stable and fast software for the distribution of acoustic pressure or the transport of signals were available.
The mathematical challenges are high computational costs, high storage requirements and stability problems. Due to the nonlocal nature of the equations it is also really hard to parallelize the calculations to make them run faster. In addition, time-adaptive methods for these types of problems were missing completely from the mathematical literature. In creating them one has to control the numerical errors with the help of a priori and a posteriori estimates, which, due to Maria's and others' work during the last years, is in principle understood now but still very complicated. Also, one easily runs into stability problems when changing the time step size.
The acoustic pressure distribution for the new building in Zürich has been successfully simulated by co-workers in Zürich and Graz using these results together with knowledge about the sound source, deriving heuristic measures from it in order to find a sequence of time steps which keeps the problem stable and adapts effectively to the computations.
There is a lot of hope to improve the performance of these tools by representing the required boundary element matrices by approximations with much sparser matrices.
Gudrun and Constanza share that they are working in fields of mathematics strongly intertwined with physics. While Gudrun is interested in Mathematical Fluid dynamics, Constanza's field is Mathematical physics. Results in both fields very much rely on understanding the spectrum of linear (or linearized) operators. In the finite-dimensional case this means to study the eigenvalues of a matrix. They contain the essence of the action of the operator - represented by different matrices in differing coordinate systems. As women in academia and as female mathematicians Gudrun and Constanza share the experience that finding the essence of their actions in science and defining the goals worth to pursue are tasks as challenging as pushing science itself, since most traditional coordinate systems were made by male colleagues and do not work in the same way for women as for men. This is true even when raising own children does not enter the equation.
For that Constanza started to reach out to women in her field to speak about their mathematical results as well as their experiences. Her idea was to share the main findings in her blog with an article and her drawings. When reaching out to a colleague she sends a document explaining the goal of the project and her questions in advance. Constanza prepares for the personal conversation by reading up about the mathematical results. But at the same moment she is interested in questions like: how do you work, how do you come up with ideas, what do you do on a regular day, etc.
The general theme of all conversations is that a regular day does not exist when working at university. It seems that the only recurring task is daily improvisation on any schedule made in advance. One has to optimize how to live with the peculiar situation being pushed to handle several important tasks at once at almost any moment and needs techniques to find compromise and balance. An important question then is: how to stay productive and satisfied under these conditions, how to manage to stay in academia and what personal meaning does the word success then take. In order to distill the answers into a blog entry Constanza uses only a few quotes and sums up the conversation in a coherent text. Since she seeks out very interesting people, there is a lot of interesting material. Constanza focuses on the aspects that stay with her after a longer thought process. These ideas then mainly drive the blog article. Another part of the blog are two drawings: one portrait of the person and one which pictures the themes that were discussed and might not have made it into the text.
Surprisingly, it turned out to be hard to find partners to talk to, and the process of turning a conversation into a blog entry takes Constanza a year or longer. On the other hand, she feels very lucky that she found women who were very generous with their time and in sharing their experiences. Besides the engagement and love for what they do, all the participants had this in common: they were already promoting the participation of women in science. To learn from them as a younger researcher means, for example, to see one's own impact on students and that building a community is very important, and a success in its own right. Though Constanza invests a lot of time in the blog project, it is worth the effort since it helps her to work towards a future either in or outside academia.
Gudrun and Constanza found out that though both of their projects explore mathematical themes as well as people working in mathematics, the written parts of blog and podcast differ: what makes it into the notes in Constanza's blog is, so to say, bonus material available only for the listening audience of Gudrun's podcast (since it is never in the shownotes). In that sense, Gudrun's podcast and Constanza's blog are complementary views on the life of researchers.
Constanza did her undergraduate studies in La Serena in Chile. She started out studying physics but soon switched to mathematics in order to understand the basics of physics. When she had almost finished her Masters program in La Serena she wanted to continue in science abroad. She was admitted to a French one-year Master program at the University Paris 6 and later did her PhD at the nearby University of Cergy-Pontoise. After that she applied for a Marie Curie fellowship in order to continue her research in Germany. She spent time as a postdoc at the Mittag-Leffler-Institut in Stockholm and at CAMTP in Maribor (Slovenia) before moving to the LMU Munich for two years with the fellowship. After that she got the position in Bonn and is now preparing for her next step.
Gudrun and Constanza want to thank Tobias Ried who put them in contact.
In his bachelor's thesis under the supervision of Jan-Philipp Weiß, Pascal Kraft worked on the efficient computation of Julia sets. In layman's terms you can describe these sets as follows: Some electronic calculators have the function of repeating the last action if you press "=" or "enter" multiple times. So if you used the root function of your calculator on a number and now you want the root of the result, you simply press "=" again. Now imagine you had a function on your calculator that didn't only square the input but also added a certain value - say 0.5. Then you put in a number, apply this function and keep repeating it over and over again. Now you ask yourself, as you keep pressing the "="-button, whether the result keeps on growing and tends to infinity or stays below some threshold indefinitely.
Using real numbers this concept is somewhat boring, but if we use complex numbers we find that the results are astonishing.
To use a more precise definition: for a function f on the complex plane, the filled Julia set is defined as the set of values z₀ for which the sequence of iterates z_{n+1} = f(z_n) stays bounded. The Julia set is defined as the boundary of this set. A typical example for a suitable function in this context is f(z) = z² + c for a fixed complex parameter c. We now look at the complex plane where the x-axis represents the real part of a complex number and the y-axis its imaginary part. For each point on this plane with coordinates (x, y) we take the corresponding complex number x + iy, plug this value into our function, and apply the function to the results over and over again up to a certain degree until we see if this sequence diverges. Computing a graphical representation of such a Julia set is a numerically costly task since we have no other way of determining its interior points than trying out a large number of starting points and seeing what happens after hundreds of iterations.
The results, however, turn out to be surprising and worth the effort. The geometric representations - images - of filled Julia sets turn out to be very aesthetically pleasing, since they are no simple compositions of elementary shapes but rather consist of intricate shapes and patterns. The reason for these beautiful shapes lies in the nature of multiplication and addition on the complex plane: A multiplication can be a magnification and down-scaling, mirroring and rotation, whereas complex addition is represented by a translation on the complex plane. Since the function is applied over and over again, the intrinsic features are repeated in scaled and rotated forms, and this results in a self-similarity on infinite scales. In his bachelor's thesis, Pascal focussed on the efficient computation of such sets, which can mean multiple things: it can either mean that the goal was to quickly write a program which could generate an image of a Julia set, or that a program was sought which was very fast in computing such an image. Lastly it can also mean that we want to save power and seek a program which uses computational power efficiently, i.e. that consumes little energy. This is a typical problem when considering a numerical approach in any application and it arises very naturally here: While the computation of Julia sets can greatly benefit from parallelization, the benefits are lost when many tasks are waiting for one calculation, and the speedup and computational efficiency therefore break down due to Amdahl's law.
The difference between these optimization criteria becomes especially obvious when we want to do further research on top of the problem solver we have used so far. The Mandelbrot set, for example, is the set of parameter values c for which the filled Julia set is not equal to the Julia set (i.e. the filled Julia set has interior points). One detail is important for the computation of either of these sets: If we check one single point we can never really say for sure whether it is inside the filled Julia set (unless we can prove periodicity, but that is not really feasible). What we can show, however, is that if the magnitude of a point in the series of computations exceeds a certain bound, the results will tend to infinity from this point on. The approach is therefore to compute steps until either a maximum number of steps is reached or a certain threshold is exceeded. Based on this, we see that computing a point which lies inside the filled Julia set is the bigger effort. So if computing a Julia set for a given parameter is a lot of work, its complex parameter most likely lies inside the Mandelbrot set (as we find many points for which the computation doesn't abort prematurely, and it is therefore likely that some of these points are interior). If we want to draw the Mandelbrot set based on this approach, we have to compute thousands of Julia sets, and if the computation of a single image took a minute this would not really be feasible anymore.
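The escape-time approach described above fits in a short sketch (a serial toy version, nothing like Pascal's optimized implementation): iterate each grid point until its magnitude exceeds the bound or the iteration budget runs out, and mark the survivors as belonging to the filled Julia set.

```python
# Escape-time computation of a filled Julia set for f(z) = z^2 + c:
# iterate each grid point until |z| exceeds a bound or the iteration
# budget is exhausted; points that never escape are taken to lie in
# the filled Julia set. Rendered as rows of '#' (inside) and ' '.

def julia_grid(c, width=60, height=30, max_iter=100, bound=2.0):
    rows = []
    for iy in range(height):
        row = ""
        for ix in range(width):
            # map the pixel to the window [-1.6, 1.6] x [-1.2, 1.2]
            z = complex(-1.6 + 3.2 * ix / (width - 1),
                        -1.2 + 2.4 * iy / (height - 1))
            n = 0
            while abs(z) <= bound and n < max_iter:
                z = z * z + c
                n += 1
            row += "#" if n == max_iter else " "   # '#' = did not escape
        rows.append(row)
    return rows

# c near the period-3 center gives the famous "Douady rabbit"
for line in julia_grid(complex(-0.123, 0.745)):
    print(line)
```

Note how the cost concentrates in the `#` pixels: they always burn the full `max_iter` budget, which is exactly the load-imbalance issue that makes parallel speedup degrade as described above.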
Since the computation of a Julia set can even be done in a web browser these days, we include below a little tool which lets you set a complex parameter and compute four different Julia sets. Have fun with our Interactive Julia Sets!
Photonic crystals are periodic dielectric media in which electromagnetic waves from certain frequency ranges cannot propagate. Mathematically speaking, this is due to gaps in the spectrum of the related differential operators. An interesting question is therefore whether there are gaps between bands of the spectrum of operators related to wave propagation, especially on periodic geometries and with periodic coefficients in the operator. It is known that the spectrum of periodic selfadjoint operators has band structure: the spectrum is a locally finite union of compact intervals called bands. In general, the bands may overlap and the existence of gaps is therefore not guaranteed. A simple example is the spectrum of the Laplacian on the whole space, which is the half axis [0, ∞) and thus has no gaps.
The classic approach to such problems in the whole space case is the Floquet–Bloch theory.
Homogenization is a collection of mathematical tools which are applied to media with strongly inhomogeneous parameters or highly oscillating geometry. Roughly speaking, the aim is to replace the complicated inhomogeneous medium by a simpler homogeneous one with similar properties and characteristics. In our case we deal with PDEs with periodic coefficients in a periodic geometry which is considered to be infinite. In the limit of a characteristic small parameter going to zero, it behaves like a corresponding homogeneous medium. To make this a bit more mathematically rigorous, one can consider a sequence of operators with a small parameter (e.g. concerning cell size or material properties) and prove certain properties in the limit as the parameter goes to zero. The optimal result is that the sequence converges to some operator which is the right homogeneous one. If this limit operator has gaps in its spectrum, then the gaps are also present in the spectra of the pre-limit operators (for small enough parameter).
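The effect can be seen in the simplest possible setting. A 1D sketch (a standard textbook example, not from the episode): for −(a(x/ε) u′)′ = 1 on (0,1) with u(0) = u(1) = 0 and a rapidly oscillating coefficient, the solution converges as ε → 0 to that of a homogeneous medium whose coefficient is the harmonic mean of a over one cell. With a(y) = 1/(2 + cos(2πy)) the effective coefficient is a* = 1/2 and the homogenized solution is u₀(x) = x(1 − x).

```python
import math

# 1D homogenization check: solve -(a(x/eps) u')' = 1, u(0)=u(1)=0,
# by finite differences (Thomas algorithm) and compare with the
# homogenized solution u0(x) = x(1-x) for a(y) = 1/(2 + cos(2*pi*y)),
# whose cell-wise harmonic mean is a* = 1/2.

def solve(eps, n=4000):
    h = 1.0 / n
    # coefficient evaluated at the cell interfaces (i + 1/2) * h
    a = [1.0 / (2.0 + math.cos(2.0 * math.pi * ((i + 0.5) * h) / eps))
         for i in range(n)]
    # tridiagonal system for the interior unknowns u_1 .. u_{n-1}
    lo = [-a[i] for i in range(1, n - 1)]          # sub-diagonal
    di = [a[i] + a[i + 1] for i in range(n - 1)]   # diagonal
    up = [-a[i + 1] for i in range(n - 2)]         # super-diagonal
    rhs = [h * h] * (n - 1)
    for i in range(1, n - 1):                      # forward elimination
        m = lo[i - 1] / di[i - 1]
        di[i] -= m * up[i - 1]
        rhs[i] -= m * rhs[i - 1]
    u = [0.0] * (n - 1)
    u[-1] = rhs[-1] / di[-1]
    for i in range(n - 3, -1, -1):                 # back substitution
        u[i] = (rhs[i] - up[i] * u[i + 1]) / di[i]
    return [0.0] + u + [0.0]

eps, n = 0.01, 4000
u = solve(eps, n)
err = max(abs(u[i] - (i / n) * (1 - i / n)) for i in range(n + 1))
print(err)   # small: the oscillating problem is close to the homogenized one
```

This is the 1D shadow of the operator-convergence statement above: for small ε the pre-limit problem inherits the behaviour of the limit (homogeneous) problem.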
The advantages of the homogenization approach compared to the classical one with Floquet–Bloch theory are:
An interesting geometry in this context is a domain with periodically distributed holes. The question arises: what happens if the sizes of holes and the period simultaneously go to zero? The easiest operator which we can study is the Laplace operator subject to the Dirichlet boundary conditions. There are three possible regimes:
A traditional ansatz in homogenization works with the concept of so-called slow and fast variables. The name comes from the following observation. If we consider an infinite layer in cylindrical coordinates, then the variable r measures the distance from the origin when going "along the layer", the angle lives in that plane, and z is the variable which goes into the finite direction perpendicular to that plane. When we have functions of the form r^k, then the derivative with respect to r changes the power to k r^(k-1), while the other derivatives leave that power unchanged. In the interesting case k is negative and the r-derivative makes the function decrease even faster. This leads to the name fast variable. The properties in this simple example translate as follows. For any function u we will think of having a set of slow and fast variables (characteristic to the problem) and a small parameter eps, and try to find u as an asymptotic expansion in powers of eps, where each term depends on both sets of variables and in our applications the fast variables are typically of the form x/eps. One can formally sort through the eps-levels using the properties of the differential operator. The really hard part then is to prove that this formal result is indeed true by finding error estimates in the right (complicated) spaces.
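The flavour of these results can be seen in the classical 1D model problem -(a(x/eps) u')' = 1 on (0,1) with u(0) = u(1) = 0: in one dimension the homogenized coefficient is known to be the harmonic mean of the periodic coefficient a. A small numerical sketch (our own illustration, with a concrete choice of a, not an example from the episode):

```python
import numpy as np

# Periodic coefficient a(y) = 2 + sin(2*pi*y); its harmonic mean is sqrt(3).
eps = 0.01
xs = np.linspace(0.0, 1.0, 200001)
dx = xs[1] - xs[0]
inva = 1.0 / (2.0 + np.sin(2.0 * np.pi * xs / eps))

def trap(f, step):
    """Composite trapezoidal rule on a uniform grid."""
    return (f.sum() - 0.5 * (f[0] + f[-1])) * step

# Exact solution of the oscillatory problem: a u' = C - x, with C fixed by u(1) = 0.
C = trap(xs * inva, dx) / trap(inva, dx)
integrand = (C - xs) * inva
u_eps = np.concatenate(([0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2.0) * dx))

# Effective coefficient a* = harmonic mean of a over one period.
y = np.linspace(0.0, 1.0, 10001)
a_star = 1.0 / trap(1.0 / (2.0 + np.sin(2.0 * np.pi * y)), y[1] - y[0])

# Homogenized solution of -a* u'' = 1: u(x) = x (1 - x) / (2 a*).
u_hom = xs * (1.0 - xs) / (2.0 * a_star)
err = np.abs(u_eps - u_hom).max()   # O(eps): the two media behave alike
```

The maximal difference between the oscillatory and the homogenized solution is of order eps, which is exactly the kind of error estimate the rigorous theory delivers.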
There are many more tools available, like the technique of Tartar/Murat, who use a weak formulation with special test functions depending on the small parameter. The weak point of that theory is that we first have to know the result as the parameter goes to zero before we can construct the test functions. Also the concept of Gamma convergence and the unfolding trick of Cioranescu are helpful.
An interesting and new application of the mathematical results is the construction of wave guides. The corresponding domain in which we place a waveguide is bounded in two directions and unbounded in one (e.g. an unbounded cylinder).
Serguei Nazarov proposed to make holes in order to open gaps in the spectrum of a specified waveguide. Andrii Khrabustovskyi suggests to distribute finitely many traps, which do not influence the essential spectrum but add eigenvalues. One interesting effect is that in this way one can find terms which are nonlocal in time or space and thus stand for memory effects of the material.
Liliana Augusto investigates filtering devices which work on a micro and nano level, and computes the pressure drop between in- and outlet of the filter as well as the collection efficiency. There is a research group conducting experimental setups for these problems, but her research group focuses specifically on mathematical modeling and computer simulation. Due to the small scale and nature of the experiments, one cannot easily take pictures of the physical filters by electron microscopy, but it is feasible to deduce some important characteristics and geometry, such as the size of the fibres, for proper modelling and simulation. Appropriate models for the small scale are mesoscopic ones like the lattice Boltzmann method, since fully microscopic models are very expensive - too expensive. She is busy with the special boundary conditions necessary at this scale: the no-slip boundary condition known from the macro scale has to be translated, and there is a certain slip to be taken into account to align the results with experimental findings.
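To give a flavour of the mesoscopic approach, here is a minimal D2Q9 lattice Boltzmann (BGK) sketch for a body-force-driven channel flow with bounce-back (no-slip) walls. All parameters and the setup are purely illustrative; this is not the OpenLB filter model used in the actual research:

```python
import numpy as np

# D2Q9 lattice: discrete velocities, weights, and opposite directions.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])
opp = [0, 3, 4, 1, 2, 7, 8, 5, 6]

nx, ny = 8, 21                 # periodic in x; wall nodes at y = 0 and y = ny - 1
tau = 0.8                      # relaxation time, kinematic viscosity nu = (tau - 0.5) / 3
g = 1e-5                       # body force driving the flow in x

def equilibrium(rho, ux, uy):
    feq = np.empty((9, nx, ny))
    for i in range(9):
        cu = c[i, 0] * ux + c[i, 1] * uy
        feq[i] = w[i] * rho * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * (ux ** 2 + uy ** 2))
    return feq

f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny)), np.zeros((nx, ny)))
solid = np.zeros((nx, ny), dtype=bool)
solid[:, 0] = solid[:, -1] = True                        # the channel walls

for step in range(4000):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho + tau * g / rho  # force via shift
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    fcol = f + (equilibrium(rho, ux, uy) - f) / tau      # BGK collision
    fcol[:, solid] = f[:, solid]                         # no collision inside the walls
    for i in range(9):                                   # streaming
        fcol[i] = np.roll(fcol[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))
    fcol[:, solid] = fcol[opp][:, solid]                 # full-way bounce-back: no slip
    f = fcol

rho = f.sum(axis=0)
profile = ((f * c[:, 0, None, None]).sum(axis=0) / rho).mean(axis=0)
```

After a few thousand steps the x-averaged velocity `profile` settles into the expected parabolic Poiseuille shape between the two no-slip walls - without any mesh for the geometry, which is what makes the method attractive for complicated filter media.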
Lattice Boltzmann methods are not very prominent in Brazil. She was looking for suitable partners and found the development group around OpenLB, which had co-operations with Brazil. She tried to apply the software to her problem, and she found out about the possibility to work in Germany through a program of the Brazilian government. It is not so common to go abroad as a PhD-student in Brazil. She learnt a lot, not only academically, and highly recommends going abroad to experience new cultures as well.
She does not speak German: everything, from looking for partners to arriving in Germany, happened so fast that she could not learn the language beforehand. At the university, English was more than sufficient for scientific work, but she had difficulties finding a place to stay. In the end, she found a room in a student dorm with German students and a few other international students.
Andrea Bertozzi from the University of California in Los Angeles (UCLA) held a public lecture on The Mathematics of Crime. She has been Professor of Mathematics at UCLA since 2003 and Betsy Wood Knapp Chair for Innovation and Creativity (since 2012). From 1995-2004 she worked mostly at Duke University, first as Associate Professor of Mathematics and then as Professor of Mathematics and Physics. As an undergraduate at Princeton University she studied physics and astronomy alongside her major in mathematics and went on to the Princeton PhD-program. For her thesis she worked in applied analysis and studied fluid flow. As a postdoc she worked with Peter Constantin at the University of Chicago (1991-1995) on global regularity for vortex patches. But even more importantly, this was the moment when she found research problems that needed knowledge about PDEs and flow but, in addition, both numerical analysis and scientific computing. She found out that she really likes to collaborate with very different specialists. Today much of the work can be carried out on a desktop, but occasionally clusters or supercomputers are necessary.
The initial request to work on Mathematics in crime came from a colleague, the social scientist Jeffrey Brantingham. He works in Anthropology at UCLA and had well established contacts with the police in LA. He was looking for mathematical input on some of his problems and raised that issue with Andrea Bertozzi. Her postdoc George Mohler came up with the idea to adapt an earthquake model after a discussion with Frederic Paik Schoenberg, a world expert in that field working at UCLA. The idea is to model crimes of opportunity as being triggered by crimes that already happened. So the likelihood of new crimes can be predicted as an excitation in space and time, like the shock of an earthquake. Of course, here statistical models are necessary which say how the excitement is distributed and decays in space and time. Mathematically this is a self-exciting point process.
The traditional Poisson process model has a single parameter and thus no memory - i.e. no connections to other events can be modelled. The Hawkes process builds on the Poisson process as background noise but adds self-excitation: each event triggers further events according to an excitation rate, with an exponential decay of the excitation over time. This is a memory effect based on actual events (not only on a likelihood), and a three-parameter model. It is not too difficult to process field data, fit the data to that model and make an extrapolation in time. Meanwhile, this idea works really well in the field. Results of field trials both in the UK and US have just been published and there is a commercial product available providing services to the police.
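In formulas, the conditional intensity of such a Hawkes process with exponential kernel reads λ(t) = μ + Σ_{t_i < t} α · exp(-β (t - t_i)): μ is the background (Poisson) rate, α the jump in excitation after each event, and β the decay rate. A small simulation sketch via Ogata's thinning algorithm (our own illustration; the parameter values are arbitrary, not fitted crime data):

```python
import math
import random

def intensity(t, events, mu, alpha, beta):
    """Conditional intensity lambda(t) given the events strictly before time t."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events if ti < t)

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    """Ogata's thinning algorithm: propose points from a dominating Poisson
    process and accept them with probability lambda(t) / lambda_bar."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while True:
        # The intensity only decays between events, so this bounds it from above.
        lam_bar = intensity(t, events, mu, alpha, beta) + alpha
        t += rng.expovariate(lam_bar)
        if t >= T:
            return events
        if rng.random() * lam_bar <= intensity(t, events, mu, alpha, beta):
            events.append(t)

events = simulate_hawkes(mu=1.0, alpha=0.5, beta=1.0, T=50.0)
```

With α/β < 1 the process is subcritical: each event triggers on average fewer than one offspring, so the simulated event stream stays finite while still showing the characteristic clustering in time.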
In addition to coming up with useful ideas and having an interdisciplinary group of people committed to making them work, it was necessary to find funding in order to support students to work on that topic. The first grant came from the National Science Foundation, and from that time on the group included George Tita (UC Irvine), a criminology expert on LA gangs, and Lincoln Chayes as another mathematician in the team.
The practical implementation of this crime prevention method for the police is as follows: before the policemen go out on a shift, they usually meet to divide their teams over the area they are serving. The teams take the crime prediction for that shift, which is calculated by the computer model on the basis of whatever data is available up to the shift. According to the expected spots of crimes, they assign teams to monitor those areas more closely. After introducing this method into police work in Santa Cruz (California), police observed a significant reduction of 27% in crime. Of course this is a wonderful success story. Another success story involves the career development of the students and postdocs, who now have permanent positions. Since this was the first group in the US to bring mathematics to police work, it opened a lot of doors for the young people involved.
Another interesting topic in the context of Mathematics and crime is gang crime data. As in the crime prediction model, the attack of one gang on a rival gang usually triggers another event soon afterwards. A well chosen group of undergraduates is already mathematically educated enough to study the temporal distribution of gang-related crime in LA, with 30 street gangs and a complex net of enemies. We are speaking about hundreds of crimes in one year related to the activity of gangs. The mathematical tool which proved to be useful was a maximum likelihood penalization model, again for the Hawkes process, applied to the expected retaliatory behaviour.
A more complex problem, which was treated in a PhD thesis, is to single out gangs which would probably be responsible for certain crimes. This means solving the inverse problem: we know the time and the crime and want to find out who did it. The result was published in Inverse Problems in 2011. The tool was a variational model with an energy which is related to the data. The missing information is guessed and then put into the energy. By finding the best guess relative to the chosen energy model, a probable candidate for the crime is found. For a small number of unsolved crimes one can just go through all possible combinations. For hundreds of unsolved crimes, however, all combinations cannot be handled. We make it easier by increasing the number of choices and formulating a continuous instead of the discrete problem, for which the optimization works with a standard gradient descent algorithm.
A third topic and a third tool is compressed sensing. It looks at sparsity in data, like the probability distribution for crime in different parts of the city. Usually the crime rate is high in certain areas of a city and very low in others. For these sharp changes one needs different methods, since we have to allow for jumps. Here the total variation enters the model as the L1-norm of the gradient. It promotes sparsity of edges in the solution. Before coming up with this concept it was necessary to cross-validate quite a number of times, which is computationally very expensive. So instead of hours, the result is obtained in a couple of minutes now.
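To see how the L1-norm of the gradient preserves sharp jumps while flattening noise, here is a small hypothetical 1D sketch (our own toy example, not the actual crime model): a piecewise-constant "rate" profile, corrupted by noise and recovered by plain gradient descent on a smoothed total-variation functional.

```python
import numpy as np

rng = np.random.default_rng(0)
# Piecewise-constant profile with sharp jumps, plus Gaussian noise.
clean = np.concatenate([np.full(50, 0.2), np.full(50, 2.0), np.full(50, 0.5)])
noisy = clean + 0.2 * rng.standard_normal(clean.size)

def tv_denoise(f, lam=0.5, delta=0.05, steps=5000, lr=0.02):
    """Minimize 0.5 * ||u - f||^2 + lam * sum_i sqrt((u_{i+1} - u_i)^2 + delta^2).
    delta smooths the absolute value so that plain gradient descent applies."""
    u = f.copy()
    for _ in range(steps):
        d = np.diff(u)
        w = d / np.sqrt(d ** 2 + delta ** 2)    # derivative of the smoothed |d|
        g = u - f                               # gradient of the data term
        g[:-1] -= lam * w                       # gradient of the TV term,
        g[1:] += lam * w                        # distributed to both endpoints
        u = u - lr * g
    return u

u = tv_denoise(noisy)   # the jumps survive, the noise is flattened
```

In the true model one would use the non-smoothed L1 penalty and a dedicated solver, but even this crude sketch recovers the plateau structure of the signal noticeably better than the noisy input.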
When Andrea Bertozzi was a young child she spent a lot of Sundays in the Science museum in Boston and wanted to become a scientist when she grew up. The only problem was that she could not decide which science would be the best choice, since she liked everything in the museum. Today she says that having chosen applied mathematics she indeed can do all science, since mathematics works as a connector between sciences and opens a lot of doors.
Since 2002 Anette Hosoi has been Professor of Mechanical Engineering at MIT (in Cambridge, Massachusetts). She is also a member of the Mathematical Faculty at MIT. After undergraduate education in Princeton she changed to Chicago for a Master's and her PhD in physics.
Anette Hosoi wanted to do fluid dynamics even before she had any course on that topic. Then she started to work as Assistant Professor at MIT, where everyone wanted to build robots. So she had to find an intersection between fluids and robots. Her first project was Robo-snails, with her student Brian Chan. Snails move using a thin film of fluid under their foot (and muscles). Since then she has been working on the fascinating boundary of flow and biomechanics.
At the BAM Colloquium she was invited for a plenary lecture on "Marine Mammals and Fluid Rectifiers: The Hydrodynamics of Hairy Surfaces". It started with a video of Boston Dynamics which showed the terrific abilities some human-like robots have today. Nevertheless, these robots are rigid systems with a finite number of degrees of freedom. Anette Hosoi is working in control and fluid mechanics and got interested in soft systems in the context of robots of a new type. Soft systems are a completely new way to construct robots, and for that one has to rethink everything from the bottom up. "You are a dreamer", she was told more than once for that.
For example, octopuses (and snails) move completely differently from us and from most animals that classically designed robots with two, four or more legs copy. At the moment the investigation of those motions is partially triggered by the plausible visualization in computer games and in animated movie sequences. A prominent example for that is the contribution of two mathematicians at UCLA to represent all interactions with snow in the animated movie Frozen. The short version of their task was to get the physics right when snow falls off trees or people fall into snow - otherwise it just doesn't look right.
To operate robots which are not built with mechanical devices but use properties of fluids to move, one needs valves and pumps to control flow. They should be cheap and efficient and without any moving parts (since moving parts cause problems). A first famous example of such a component is a fluid rectifier which was patented by Nikola Tesla in the 1920s. His device relied on inertia. But in small devices, as necessary for the new robots, there is no inertia. For that reason Anette Hosoi and her group need to implement new mechanisms. A promising effect is elasticity - especially in channels. Or putting hair on the boundary of channels. Hair can cause asymmetric behaviour in the system: in one direction it bends easily with the flow, while in the opposite direction it might hinder flow.
While trying to come up with clever ideas for the new type of robots, the group found a topic which is present (almost) everywhere in biology - which means a gold mine for research and open questions. Of course hair interacts with the flow and is not just a rigid boundary, and one has to admit that in real-life applications the related flow area usually is not small (i.e. not negligible in modelling and computations). Mathematically speaking, the model needs a change in the results for the boundary layer. This is clear from the observations and the sought-after applications. But it is clear from the mathematical model as well. At the moment they are able to treat the case of low Reynolds number and the linear Stokes equation, which of course is a simplification. But for that case the new boundary conditions are not too complicated and can be treated similarly to porous media (i.e. one has to find an effective permeability). Fortunately, even analytic solutions could be calculated.
As next steps it would be very interesting to model plunging hairy surfaces into fluids or withdrawing hairy surfaces from fluids (which is even more difficult). This would have a lot of interesting applications and a first question could be to find optimal hair arrangements. This would mean to copy tricks of bat tongues like people at Brown University are doing.
A very well-known game is Tangram. Here a square is divided into seven pieces (which all are polygons). These pieces can be rearranged, e.g. by moving them around on the table. The task for the player is to form given shapes using the seven pieces - like a cat, for example. Of course the Tangram cat looks more like a flat Origami-cat. But we could take the Tangram idea and use thousands or millions of little pieces to build a much more realistic cat with them - as with pixels on a screen. In three dimensions one can play a similar game with pieces of a cube. This could lead to a LEGO-like three-dimensional cat, for example. In this traditional Tangram game, there is no fundamental difference between the versions in dimension two and three.
But in 1914 it was shown that given a three-dimensional ball, there exists a decomposition of this ball into a finite number of subsets, which can then be rearranged to yield two identical copies of the original ball. This sounds like a magical trick – or more scientifically said – like a paradoxical situation. It is now known under the name Banach-Tarski paradox. In his lecture, Nicolas Monod dealt with the question: Why are we so surprised about this result and think of it as paradoxical?
One reason is the fact that we think we deeply understand what volume is and expect it to be preserved under rearrangements (as in the Tangram game). The impact of the Banach-Tarski paradox on our understanding of volume is then similar to the shift in understanding the relation between time and space brought about by Einstein's relativity theory (which is from about the same time). In short, the answer is: in our everyday concept of volume we trust in too many good properties of it.
It was Felix Hausdorff who looked at the axioms which should be valid for any measure (such as volume). It should be independent of the point in space where we measure (and of the coordinate system), and if we divide objects, it should add up properly. In our understanding there is a third hidden property: the concept "volume" must make sense for every subset of space we choose to measure. Unfortunately, it is a big problem to assign a volume to any given object, and Hausdorff showed that these three properties cannot all be true at the same time in three space dimensions. Curiously, they can be satisfied in two dimensions but not in three.
Of course, we would like to understand why there is such a big difference between two and three space dimensions, that the naive concept of volume breaks down by going over to the third dimension. To see that let us consider motions. Any motion can be decomposed into translations (i.e. gliding) and rotations around an arbitrarily chosen common center. In two dimensions the order in which one performs several rotations around the same center does not matter since one can freely interchange all rotations and obtains the same result. In three dimensions this is not possible – in general the outcomes after interchanging the order of several rotations will be different. This break of the symmetry ruins the good properties of the naive concept of volume.
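This non-commutativity is easy to check numerically: two planar rotations always commute, while two spatial rotations about different axes in general do not. A short sketch:

```python
import numpy as np

def rot2(theta):
    """Rotation of the plane by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def rot3_x(theta):
    """Rotation of space about the x-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot3_z(theta):
    """Rotation of space about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

A2, B2 = rot2(0.7), rot2(1.3)
comm2 = np.abs(A2 @ B2 - B2 @ A2).max()   # zero up to rounding: order is irrelevant

A3, B3 = rot3_x(0.7), rot3_z(1.3)
comm3 = np.abs(A3 @ B3 - B3 @ A3).max()   # clearly nonzero: order matters
```

The group of rotations of the plane is commutative, the group of rotations of space is not - and it is exactly this extra freedom in three dimensions that the Banach-Tarski construction exploits.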
Serious consequences of the Banach-Tarski paradox are not that obvious. No one has really duplicated a ball in real life. But measure theory is the basis of the whole of probability theory and its countless applications. There, we have to understand several counter-intuitive concepts to have the right understanding of probabilities and risk. More anecdotally, an idea of Bruno Augenstein is that in particle physics certain transformations are reminiscent of the Banach-Tarski phenomenon.
Nicolas Monod really enjoys the beauty and the liberty of mathematics. One does not have to believe anything without a proof. In his opinion, mathematics is the language of natural sciences and he considers himself as a linguist of this language. This means in particular to have a closer look at our thought processes in order to investigate both the richness and the limitations of our models of the universe.
Helen Wilson always wanted to do maths and had imagined herself becoming a mathematician from a very young age. But after graduation she did not have any road map ready in her mind. So she applied for jobs which - due to a recession - did not exist. Today she considers herself lucky for that, since she took a Master's course instead (at Cambridge University), which hooked her to mathematical research in the field of viscoelastic fluids. She stayed for a PhD and after that for postdoctoral work in the States, and then lectured at Leeds University. Today she is a Reader in the Department of Mathematics at University College London.
So what are viscoelastic fluids? If we consider everyday fluids like water or honey, it is a safe assumption that their viscosity does not change much - it is a material constant. Those fluids are called Newtonian fluids. All other fluids, i.e. fluids with non-constant viscosity or even more complex behaviours, are called non-Newtonian and viscoelastic fluids are a large group among them.
Already the name suggests, that viscoelastic fluids combine viscous and elastic behaviour. Elastic effects in fluids often stem from clusters of particles or long polymers in the fluid, which align with the flow. It takes them a while to come back when the flow pattern changes. We can consider that as keeping a memory of what happened before. This behaviour can be observed, e.g., when stirring tinned tomato soup and then waiting for it to go to rest again. Shortly before it finally enters the rest state one sees it springing back a bit before coming to a halt. This is a motion necessary to complete the relaxation of the soup.
Another surprising behaviour is the so-called Weissenberg effect, where in a rotation of elastic fluid the stretched out polymer chains drag the fluid into the center of the rotation. This leads to a peak in the center, instead of a funnel which we expect from experiences stirring tea or coffee.
The big challenge with all non-Newtonian fluids is that we do not have equations which we know are the right model. It is mostly guess work and we definitely have to be content with approximations.
And so it is a compromise of fitting what we can model and measure to the easiest predictions possible. Of course, slow flow often can be considered to be Newtonian whatever the material is.
The simplest models then take the so-called retarded fluid assumption, i.e. the elastic properties are considered to be only weak. Then, one can expand around the Newtonian model as a base state.
The first non-linear model which is constructed in that way is that of second-order fluids. They have two more parameters than the Newtonian model, which are called normal stress coefficients. The next step leads to third-order fluids etc. In practice no higher than third-order fluids are investigated.
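The second-order fluid model mentioned here is usually written in terms of the Rivlin-Ericksen tensors; this is the standard textbook form (not a formula quoted in the episode):

```latex
T = -p\,I + \mu A_1 + \alpha_1 A_2 + \alpha_2 A_1^2,
\qquad
A_1 = \nabla u + (\nabla u)^{\mathsf T},
\qquad
A_2 = \frac{\mathrm{D}A_1}{\mathrm{D}t} + A_1(\nabla u) + (\nabla u)^{\mathsf T} A_1,
```

where μ is the Newtonian viscosity and α₁, α₂ are the two extra parameters, the normal stress coefficients. Setting α₁ = α₂ = 0 recovers the Newtonian base state of the expansion.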
Of course there are a plethora of interesting questions connected to complex fluids. The main question in the work of Helen Wilson is the stability of the flow of those fluids in channels, i.e. how does it react to small perturbations? Do they vanish in time or could they build up to completely new flow patterns? In 1999, she published results of her PhD thesis and predicted a new type of instability for a shear-thinning material model. It was to her great joy when in 2013 experimentalists found flow behaviour which could be explained by her predicted instability.
More precisely, in the 2013 experiments a dilute polymer solution was sent through a microchannel. The material model for the fluid is shear thinning, as in Helen Wilson's thesis. They observed oscillations from side to side of the channel and surprising noise in the maximum flow rate. This could only be explained by an instability which they did not know about at that moment. In a microchannel inertia is negligible, and the very low Reynolds number suggested that the instability must be caused by the non-Newtonian material properties, since for Newtonian fluids instabilities can only be observed if the flow configuration exceeds a critical Reynolds number. Fortunately, the answer was found in the 1999 paper.
Of course, even for the easiest non-linear models one arrives at highly non-linear equations. In order to analyse stability of solutions to them one firstly needs to know the corresponding steady flow. Fortunately, if starting with the easiest non-linear models in a channel one can still find the steady flow as an analytic solution with paper and pencil since one arrives at a 1D ODE, which is independent of time and one of the two space variables.
The next question then is: How does it respond to small perturbation? The classical procedure is to linearize around the steady flow which leads to a linear problem to solve in order to know the stability properties. The basic (steady) flow allows for Fourier transformation which leads to a problem with two scalar parameters - one real and one complex. The general structure is an eigenvalue problem which can only be solved numerically. After we know the eigenvalues we know about the (so-called linear) stability of the solution.
An even more interesting research area is so-called non-linear stability. But it is still an open field of research since it has to keep the non-linear terms. The difference between the two strategies (i.e. linear and non-linear stability) is that the linear theory predicts instability to the smallest perturbations but the non-linear theory describes what happens after finite-amplitude instability has begun, and can find larger instability regions. Sometimes (but unfortunately quite rarely) both theories find the same point and we get a complete picture of when a stable region changes into an unstable one.
One other really interesting field of research for Helen Wilson is to find better constitutive relations, especially since the often used power law has inbuilt unphysical behaviour (which means it is probably too simple). For example, taking a power law with negative exponent says that in the middle of the flow there is a singularity (we would divide by zero) and perturbations are not able to cross the center line of a channel.
Also, it is unphysical that according to the usual models the shear-thinning fluid should be instantly back to a state of high viscosity after switching off the force. For example most ketchup gets liquid enough to serve it only when we shake it. But it is not instantly thick after the shaking stops - it takes a moment to solidify. This behaviour is called thixotropy.
Josie Dodd finished her Master's in Mathematical and Numerical Modelling of the Atmosphere and Oceans at the University of Reading. In her PhD project she is working in the Mathematical Biology Group inside the Department of Mathematics and Statistics in Reading. In this group she develops models that describe plant and canopy growth of the Bambara Groundnut - especially the plant interaction when grown as part of a crop. The project is interdisciplinary and interaction with biologists is encouraged by the funding entity.
Why is this project so interesting? In general, the experimental effort to understand crop growth is very costly and takes a lot of time. So it is a great benefit to have cheaper and faster computer experiments. The project studies the Bambara Groundnut since it is a candidate for adding to our food supply in the future. It is a remarkably robust crop, drought tolerant and nitrogen enriching, which means the production of yield does not depend on fertilizer. The typical plant grows 150 days per year. The study will find results for which verification and parameter estimations from actual green house data are available. On the other hand, all experience on the modelling side will be transferable to other plants to a certain degree. The construction of the mathematical model includes finding equations which are simple enough but cover the main processes, as well as numerical schemes which solve them effectively.
At the moment, temperature and solar radiation are the main input to the model. In the future, it should include rain as well. Another important parameter is the placement of the plants - especially when asking for arrangements which maximize the yield. Analyzing the available data from the experimental partners leads to three nonlinear ODEs for each plant. Also, the leaf production has a Gaussian distribution relationship with time and temperature. The results then enter the biomass equation. The growth process of the plant is characterized by a change of the rate of change over time. This is a property of the plant that leads to nonlinearity in the equations.
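A deliberately simplified sketch of such a growth model (entirely hypothetical - the functions, parameters and synthetic temperatures below are our own invention, not the actual Reading model): biomass grows logistically, modulated by a temperature response and by a Gaussian-in-time leaf production term that feeds the biomass equation.

```python
import numpy as np

def thermal_time_rate(T, T_base=10.0, T_opt=30.0):
    """Hypothetical growth response to the daily mean temperature T (deg C):
    zero below T_base, increasing linearly, saturating at T_opt."""
    return max(0.0, min(T - T_base, T_opt - T_base)) / (T_opt - T_base)

def simulate(days=150, B_max=100.0, r=0.08, mu=70.0, sigma=25.0):
    """Explicit Euler integration with a step of one day."""
    B, out = 1.0, [1.0]
    for day in range(days):
        T = 22.0 + 6.0 * np.sin(2.0 * np.pi * day / days)   # synthetic temperatures
        leaves = np.exp(-0.5 * ((day - mu) / sigma) ** 2)   # Gaussian leaf production
        dB = r * thermal_time_rate(T) * leaves * B * (1.0 - B / B_max)
        B += dB
        out.append(B)
    return np.array(out)

B = simulate()   # biomass over one 150-day growing season
```

The nonlinearity the episode describes - a rate of change that itself changes over the season - is visible here in the product of the logistic term with the time-dependent leaf and temperature factors.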
Nevertheless, the model has to stay as simple as possible while, firstly, bridging the gap to more complicated and more precise models and, secondly, staying interpretable, so that non-mathematicians are able to use it and understand its behaviour. They are the main group for which the models should be a useful tool.
So far, the model for interaction with neighbouring plants is the computationally more costly part, where - of course - geometric considerations of overlapping have to enter the model. Though it does not yet consider many plants (since green-house-sized experimental data are available), the model scales well to a large number of plants due to its inherent symmetries. Since at the moment the optimization of the arrangements of plants has priority, a lot of standardization and simplifying assumptions are applied. In the future more parameters such as the input of water should be included, and it would be nice to have more scales. Such additional scales would include the root system or other biological processes inside the plant.
Of course, the green house is well controlled and available field data are less precise due to the difficulty of measurements in the field.
During her work on the project and as a tutor, Josie Dodd found out that she really likes computer programming. Since it is applicable to so many things, these skills open a lot of doors. Therefore, she would encourage everybody to give it a try.
We talked about the numerical treatment of complex geometries. The main problem is that it is difficult to automatically generate grids for computations on the computer if the shape of the boundary is complex. Examples for such problems are the simulation of airflow around airplanes, trucks or racing cars. Typically, the approach for these flow simulations is to put the object in the middle of the grid. Appropriate far-field boundary conditions take care of the right setting of the finite computational domain on the outer boundary (which is cut from an infinite model). Typically in such simulations one is mainly interested in quantities close to the boundary of the object.
Instead of using an unstructured or body-fitted grid, Sandra May is using a Cartesian embedded boundary approach for the grid generation: the object with complex geometry is cut out of a Cartesian background grid, resulting in so called cut cells where the grid intersects the object and Cartesian cells otherwise. This approach is fairly straightforward and fully automatic, even for very complex geometries. The price to pay comes in shape of the cut cells which need special treatment. One particular challenge is that the cut cells can become arbitrarily small since a priori their size is not bounded from below. Trying to eliminate cut cells that are too small leads to additional problems which conflict with the goal of a fully automatic grid generation in 3d, which is why Sandra May keeps these potentially very small cells and develops specific strategies instead.
The biggest challenge caused by the small cut cells is the small cell problem: easy to implement (and therefore standard) explicit time stepping schemes are only stable if a CFL condition is satisfied; this condition essentially couples the time step length to the spatial size of the cell. Therefore, for the very small cut cells one would need to choose tiny time steps, which is computationally not feasible. Instead, one would like to choose a time step appropriate for the Cartesian cells and use this same time step on cut cells as well.
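A toy 1D illustration of the small cell problem (our own sketch, not Sandra May's actual scheme): explicit upwind advection on a grid with one tiny cell. A time step sized for the regular cells blows up in the small cell; a step sized for the small cell is stable but a hundred times smaller.

```python
import numpy as np

# Advection u_t + u_x = 0 on a grid containing one tiny "cut cell".
h = np.full(100, 0.01)
h[50] = 1e-4                                   # the cut cell, 100x smaller than the rest
x = np.cumsum(h) - h / 2
u0 = np.exp(-200.0 * (x - 0.3) ** 2)           # smooth initial pulse

def upwind(u, dt, steps):
    """First-order explicit upwind scheme with zero inflow on the left."""
    u = u.copy()
    for _ in range(steps):
        du = u - np.concatenate(([0.0], u[:-1]))   # u_i - u_{i-1}
        u = u - (dt / h) * du
    return u

dt_cartesian = 0.5 * 0.01                      # CFL step sized for the regular cells
dt_cut = 0.5 * h.min()                         # CFL step sized for the smallest cell

u_bad = upwind(u0, dt_cartesian, 60)           # violates the CFL condition in the cut cell
u_good = upwind(u0, dt_cut, 200)               # stable, but the step is 100 times smaller
```

The first run explodes because dt/h is about 50 in the cut cell; the second stays bounded but wastes a factor of 100 in time-step size everywhere - precisely the dilemma that motivates treating the cut cells implicitly.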
Sandra May and her co-workers have developed a mixed explicit implicit scheme for this purpose: to guarantee stability on cut cells, an implicit time stepping method is used on cut cells. This idea is similar to the approach of using implicit time stepping schemes for solving stiff systems of ODEs. As implicit methods are computationally more expensive than explicit methods, the implicit scheme is only used where needed (namely on cut cells and their direct neighbors). In the remaining part of the grid (the vast majority of the grid cells), a standard explicit scheme is used. Of course, when using different schemes on different cells, one needs to think about a suitable way of coupling them.
The mixed explicit-implicit scheme has been developed in the context of Finite Volume methods. The coupling has been designed with the goals of mass conservation and stability and is based on using fluxes to couple the explicit and the implicit scheme. This way, mass conservation is guaranteed by construction (no mass is lost). In terms of stability of the scheme, it can be shown that using a second-order explicit scheme coupled to a first-order implicit scheme by flux bounding results in a TVD stable method. Numerical results for coupling a second-order explicit scheme to a second-order implicit scheme show second-order convergence in the L^1 norm and between first- and second-order convergence in the maximum norm along the surface of the object in two and three dimensions.
We also talked about the general issue of handling shocks properly in numerical simulations: in general, solutions to nonlinear hyperbolic systems of conservation laws such as the Euler equations contain shocks and contact discontinuities, which in one dimension express themselves as jumps in the solution. For a second-order finite volume method, slopes are typically reconstructed on each cell. If one reconstructed these slopes using e.g. central difference quotients in one dimension close to shocks, this would result in oscillations and/or unphysical results (like negative density). To avoid this, so-called slope limiters are typically used. There are two main ingredients to a good slope limiter (which is applied after an initial polynomial based on interpolation has been generated): first, the algorithm (slope limiter) needs to detect whether the solution in this cell is close to a shock or whether the solution is smooth in the neighborhood of this cell. If the algorithm thinks that the solution is close to a shock, it reacts and adjusts the reconstructed polynomial appropriately. Otherwise, it sticks with the polynomial based on interpolation. One commonly used way in one dimension to identify whether one is close to a shock is to compare the values of a right-sided and a left-sided difference quotient. If they differ too much, the solution is (probably) not smooth there. Good reliable limiters are really difficult to find.
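The comparison of one-sided difference quotients can be illustrated with the classical minmod limiter (one common choice; the episode does not name a specific limiter, so this is a generic sketch):

```python
# Minimal sketch of slope limiting with the classical minmod limiter.

def minmod(a, b):
    """Return the argument of smaller magnitude if the signs agree, else 0."""
    if a * b <= 0.0:
        return 0.0  # one-sided slopes disagree in sign: likely near a shock
    return a if abs(a) < abs(b) else b

def limited_slopes(u, h):
    """Reconstruct one limited slope per interior cell of a 1D array of
    cell averages with uniform cell size h."""
    slopes = []
    for i in range(1, len(u) - 1):
        left = (u[i] - u[i - 1]) / h    # left-sided difference quotient
        right = (u[i + 1] - u[i]) / h   # right-sided difference quotient
        slopes.append(minmod(left, right))
    return slopes

# Smooth data: both one-sided quotients agree, the slope is kept.
print(limited_slopes([0.0, 0.1, 0.2, 0.3], h=1.0))
# Data with a jump: the slope next to the discontinuity is set to zero,
# which avoids oscillations in the reconstruction.
print(limited_slopes([0.0, 0.0, 1.0, 1.0], h=1.0))
```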
Sonia Fliss is interested in so-called transparent boundary conditions: boundary conditions on the artificial boundaries with just the right properties. There are several classical methods, like perfectly matched layers (PML) placed around the region of interest, which are built to absorb incoming waves (via a complex stretching of the space variable). But unfortunately this does not work for non-homogeneous media.
Traditionally, boundary integral equations were also used to construct transparent boundary conditions. But in general, this is not possible for anisotropic media (or heterogeneous media, e.g. with periodic properties).
The main idea in the work of Sonia Fliss is quite simple: She surrounds the region of interest with half spaces (three or more). Then, the solutions in each of these half spaces are determined by Fourier transform (or Floquet waves for periodic media, respectively). The difficulty is that in the overlap of the different half spaces the representations of the solutions have to coincide.
Sonia Fliss proposes a method which ensures that this is true (possibly under certain compatibility conditions). The chosen number of half spaces does not change the method very much. The idea is charmingly simple, but the proof that these solutions exist and have the right properties is more involved. She is still working on making the proofs easier to understand and apply.
It is a fun fact that complex media were the starting point for the idea, and only afterwards it became clear that it also works perfectly well for homogeneous (i.e. much less complex) media. One might consider these to be very theoretical results, but they lead to numerical simulations which match our expectations, are quite impressive, and would be impossible without knowing the right transparent boundary conditions.
Sonia Fliss is still very fascinated by the many open theoretical questions. At the moment she is working at the Ecole Nationale Supérieure des Techniques Avancées (ENSTA) near Paris as Maître de conférences.
For each species, the model contains the diffusion of the individual beings, the birth rate, the saturation rate or concentration, and the aggressiveness rate.
Starting from an initial condition, i.e. a distribution of both species in the regarded domain, the above equations with additional constraints for well-posedness describe the future outcome. In the long run, this could either be co-existence, or extinction of one or both species. In case of co-existence, the question is how they will separate on the assumed bounded radial domain. For this, he adapted a moving plane method.
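The possible long-run outcomes can be seen already in the spatially homogeneous version of the competition system, where the densities obey a pair of ODEs. The following sketch uses the standard Lotka-Volterra competition form with illustrative coefficients; neither the numbers nor the notation are taken from Saldaña's work:

```python
# Spatially homogeneous Lotka-Volterra competition model (illustrative):
#   u' = u * (a1 - b1*u - c1*v),   v' = v * (a2 - b2*v - c2*u)
# with birth rates a_i, saturation rates b_i and aggressiveness rates c_i.

def simulate(u, v, a1, b1, c1, a2, b2, c2, dt=0.01, steps=20000):
    """Explicit Euler integration of the competition ODEs up to t = steps*dt."""
    for _ in range(steps):
        du = u * (a1 - b1 * u - c1 * v)
        dv = v * (a2 - b2 * v - c2 * u)
        u, v = u + dt * du, v + dt * dv
    return u, v

# Weak competition (small aggressiveness): both species persist (co-existence).
print(simulate(0.5, 0.5, a1=1, b1=1, c1=0.5, a2=1, b2=1, c2=0.5))

# Strong competition: the initially more numerous species drives the
# other one to extinction.
print(simulate(0.6, 0.4, a1=1, b1=1, c1=2.0, a2=1, b2=1, c2=2.0))
```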
On a bounded domain, the given boundary conditions are an important aspect of the mathematical model: in this setup, a homogeneous Neumann boundary condition can represent a fence which no one, or no wolf, can cross, whereas a homogeneous Dirichlet boundary condition assumes a lethal boundary, such as an electric fence or a cliff, which sets the density of living, or surviving, individuals touching the boundary to zero.
The initial conditions, that is, the distributions of the wolf species, were quite general but assumed to be nearly reflection symmetric.
The analytical treatment of the system was less tedious in the case of Neumann boundary conditions due to reflection symmetry at the boundary, similar to the method of image charges in electrostatics. The case of Dirichlet boundary conditions required more analytical results, such as Serrin's boundary point lemma. It turned out that, asymptotically, in both cases the two species will separate into two symmetric functions. Here, Saldaña introduced a new aspect to this problem: he let the birth rate, saturation rate and aggressiveness rate vary in time. This time dependence modelled seasons, since for example wolves' behaviour depends on food availability.
The Lotka-Volterra model can also be adapted to a predator-prey setting or a cooperative setting, where the two species live symbiotically. In the latter case, there also is an asymptotic solution, in which the two species do not separate; they stay together.
Alberto Saldaña started his academic career in Mexico, where he found his love for mathematical analysis. He then did his PhD in Frankfurt, and now he is a postdoc in the Mathematics Department at the University of Brussels.
On the other hand, it should be possible to study the electric field of point charges, since this is how the electric field is created. One solution to this challenge is to slightly change the point of view, in a way similar to Einstein's theory of special relativity. There, instead of taking the momentum as preserved quantity and Lagrange parameter, the Lagrangian is changed in such a way that the bound for the velocity (in relativity, the speed of light) is incorporated in the model.
In the electromagnetic model, the Lagrangian would have to restrict the intensity of the fields. This is the idea which Born and Infeld published already at the beginning of the last century. For the resulting system it is straightforward to calculate the fields for point charges. But unfortunately it is impossible to simply add the fields of several point charges (no superposition principle), since the resulting theory (and the PDE) are nonlinear. Physically this expresses that the point charges do not act independently of each other; the theory accounts for a certain interaction between the charges. Probably this interaction is only really important if the charges are near enough to each other, and locally it should be influenced only by the nearest charge. But it has not been possible to prove that up to now.
The electrostatic case is elliptic but has a singularity at each point charge, so no classical regularity results are directly applicable. On the other hand, there is an interesting interplay with geometry, since the PDE occurs as the mean curvature equation of hypersurfaces in Minkowski space in relativity.
The evolution problem is completely open. In the static case we have existence and uniqueness from the way the system is built, without really looking at the PDE. The PDE should provide at least qualitative information on the electric field. So if, e.g., there is a positive charge, there could be a maximum of the field there (for negative charges a minimum, respectively), and we would expect the field to be smooth outside these singular points. So a Lipschitz regular solution would seem probable. But it is open how to prove this mathematically.
A special property is that the model has infinitely many inherent scales, namely all even powers of the gradient of the field. So understanding asymptotic limits in these scales could be a first interesting step.
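The statement about the even powers can be made concrete in the electrostatic case. The following is the standard textbook form of the Born-Infeld Lagrangian (phi is the electrostatic potential, b the field-strength bound; the formula is not quoted from the episode): a Taylor expansion of the square root produces every even power of the gradient, each weighted by its own power of b.

```latex
% Electrostatic Born-Infeld Lagrangian and its Taylor expansion,
% exhibiting all even powers of the field gradient:
\[
  \mathcal{L}(\nabla\phi)
    = b^2\left(1 - \sqrt{1 - \frac{|\nabla\phi|^2}{b^2}}\right)
    = \frac{1}{2}\,|\nabla\phi|^2
    + \frac{1}{8\,b^2}\,|\nabla\phi|^4
    + \frac{1}{16\,b^4}\,|\nabla\phi|^6
    + \dots
\]
```

The leading term is the classical linear electrostatics Lagrangian, which is why the theory reduces to the Maxwell case for weak fields.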
Denis Bonheure got his mathematical education at the Free University of Brussels and is working there as Professor of Mathematics at the moment.
On the other hand - taking the more particle centered point of view - we can try to model the reaction of the photons to certain stimuli.
The modelling is still in progress and explored in many different ways.
The main focus of our guest Claire Scheid, who works on nanophotonics, is to solve the corresponding partial differential equations numerically. It is challenging that the nanoscale photons have to be visible in a discretization of a macro-scale domain. So one needs special ideas to obtain a geometrical description of the changing properties of the material. Even on the fastest available computers, making these computations fast and precise enough is still the bottleneck.
A special property which has to be reflected in the model is the delay in the response of a photon to incoming light waves, which also depends on the frequency of the light (and is connected to its velocity; this frequency dependence is known as dispersion). So an equation for the evolution of the electron polarization must be added to the standard model (which is the Maxwell system).
One can say that the model for the permeability has to take into account the whole history of the process. Mathematically this is done through a convolution operator in the equation. It is also possible to describe the same phenomenon in frequency space.
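The role of the convolution can be sketched in a few lines. The exponential memory kernel below (a Debye-type relaxation) and all parameter names are illustrative assumptions, not the specific dispersion model from the episode; the point is only that the response at time t is a weighted sum over the whole field history:

```python
# Material response with memory: the polarization at time step n is a
# discrete convolution of the field history with a susceptibility kernel.
import math

def polarization(field_history, dt, chi0=1.0, tau=1.0):
    """P(t_n) = sum_k chi(t_n - t_k) * E(t_k) * dt,
    with the (assumed) kernel chi(t) = (chi0 / tau) * exp(-t / tau)."""
    n = len(field_history) - 1
    total = 0.0
    for k, e in enumerate(field_history):
        chi = (chi0 / tau) * math.exp(-(n - k) * dt / tau)
        total += chi * e * dt
    return total

# A constant field switched on at t = 0: the response does not jump
# instantly but builds up gradually toward chi0 * E -- the delayed
# response described above.
dt = 0.01
history = [1.0] * 500          # E = 1 on the interval [0, 5]
print(polarization(history, dt))  # close to chi0 = 1.0
```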
In general the work in this field is possible only in good cooperation and interdisciplinary interaction with physicists - which also makes it especially interesting.
Since 2009, Claire Scheid has worked at INRIA Méditerranée in Sophia-Antipolis as part of the Nachos team and teaches at the University of Nice as a member of the Laboratoire Dieudonné.
She did her studies at the Ecole Normale Supérieure in Lyon and later at Paris VI (Université Pierre et Marie Curie). For her PhD she moved to Grenoble and then spent two years as a postdoc at the University of Oslo (Norway).
Marie Kray works in the Numerical Analysis group of Prof. Grote in the Mathematics Department of the University of Basel. She finished her PhD in 2012 at the Laboratoire Jacques-Louis Lions in Paris and received her professional education in Strasbourg and Orsay.
Since boundaries occur at the surface of volumes, the boundary manifold has one spatial dimension less than the physical domain under consideration. Therefore, the treatment of normal derivatives, as in the Neumann boundary condition, needs special care.
The implicit Crank-Nicolson method turned out to be a good numerical scheme for integrating the time derivative, and an upwind scheme solved the discretized hyperbolic problem in the space dimension.
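This combination can be sketched on a 1D model problem. The code below is an illustrative discretization of the advection equation u_t + a u_x = 0, not the authors' actual scheme: Crank-Nicolson in time, first-order upwinding in space, with a fixed inflow value on the left; the resulting lower-bidiagonal system is solved by a single left-to-right sweep.

```python
# Crank-Nicolson time stepping combined with first-order upwinding in
# space for u_t + a u_x = 0 with a > 0 (illustrative model problem).

def cn_upwind_step(u, a, dt, h, inflow=0.0):
    """One Crank-Nicolson step. Upwinding makes the implicit system lower
    bidiagonal, so it is solved by one forward sweep."""
    c = a * dt / (2.0 * h)       # half the Courant number
    new = [inflow]               # Dirichlet inflow boundary on the left
    for i in range(1, len(u)):
        rhs = u[i] - c * (u[i] - u[i - 1]) + c * new[i - 1]
        new.append(rhs / (1.0 + c))
    return new

# Transport a step profile to the right. For this Courant number all update
# coefficients are nonnegative, so the profile stays monotone (no spurious
# oscillations), at the price of some numerical smearing of the front.
u = [1.0] * 10 + [0.0] * 40
for _ in range(30):
    u = cn_upwind_step(u, a=1.0, dt=0.01, h=0.02, inflow=1.0)
```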
An alternative approach to separate the signals from several point sources or scatterers is to apply global integral boundary conditions and to assume a time-harmonic representation.
The presented methods have important applications in medical imaging: a wide range of methods work well for single scatterers, but tumors often tend to spread to several places. This severely impedes inverse reconstruction methods such as the TRAC method, but the separation of waves enhances the use of these methods on problems with several scatterers.
Scattering is a phenomenon in the propagation of waves. An interesting example from our everyday experience: when sound waves hit obstacles, the wave field gets distorted. So, in a way, we can "hear" the obstacle.
Sound waves are scalar, namely, changes in pressure. Other wave types scatter as well but can have a more complex structure. For example, seismic waves are elastic waves and travel at two different speeds as primary and secondary waves.
As a mathematician, one can abandon the application completely and define a wave as a solution of a wave equation. Hereby, we also mean finding these solutions in different appropriate function spaces (which represent certain properties of the class of solutions); this is a very global look at different wave properties and gives a general idea about waves. The equations are treated as entities in their own right. Only later in the process does it make sense to compare the results with experiments and to decide whether the equations fit or are too simplified.
Prof. Sayas started out in a "safe elliptic world" with well-established and classical theories, such as the mapping property between data and solutions. But for the study of wave equations there is no classical or standard method today; instead, very many different tools are used to find different types of results, such as the preservation of energy. Sometimes it is obvious that the results cannot be optimal (or sharp) if, e.g., properties like the convexity of obstacles do not play any role in obtaining them. And many questions are still wide open. Also, the numerical methods must be well designed.
Up to now, transient waves are the most challenging and interesting problem for Prof. Sayas. They include all frequencies and propagate in time. So it is difficult to find the correct speed of propagation, and dispersion enters the configuration as well. On the one hand, the existence and regularity of solutions, together with other properties, have to be shown; on the other hand, it is necessary to compute the propagation process - i.e. the solutions - numerically for simulations.
There are many different numerical schemes for bounded domains. Prof. Sayas prefers FEM and combines it with boundary integral equations representing the effects of the outer domain. The big advantage of the boundary integral representation is that it is physically correct; unfortunately, it is very complicated, and all points on the boundary are interconnected.
Finite Elements fit well into a black-box approach, which explains their popularity among engineers. The regularity of the boundary can be really low if one chooses Galerkin methods. The combination of both tools is a bit tricky, since the solver for the wave equations needs data on the boundary which it has to get from the boundary element code, and vice versa. Through this coupling it is already clear that integrating the different tools is an important part of the coding, and it has to be done in such a way that all users of the code who will improve it in the future can understand what is happening.
Prof. Sayas is fascinated by his research field. This is also due to its educational aspect: the challenging mathematics and the still mainly unsettled set of tools, together with the intensive computational part of his work. The area is still wide open, and one has to explain the mathematics to other people interested in the results.
In his career he started out studying Finite Elements at the University of Zaragoza and worked on boundary elements with his PhD supervisor from France. After some time he was looking for a challenging new topic and found the field in which he can combine both. He worked for three years at the University of Minnesota (2007-2010) and decided to pursue his future at a university in the U.S. In this way he arrived at the University of Delaware and is very satisfied with the opportunities in his field of research and the chances for young researchers.
He is also Professor on leave of Applied Mathematics at the Universidad Autónoma de Madrid (UAM) and a Humboldt Awardee at the University of Erlangen-Nuremberg (FAU). He was invited by the PDE group of our faculty in Karlsruhe to join our work on wave phenomena for some days in May 2015.
In our conversation he admits that waves have been holding his interest since his work as a PhD student in Paris at the Université Pierre-et-Marie-Curie in the world famous group of Jacques-Louis Lions.
Indeed, waves are everywhere. They are visible in everything which vibrates and are an integral part of life itself. In our work as mathematicians, very often the task is to influence waves and vibrating structures, like houses or antennae, such that they remain stable. This leads to control problems like feedback control for elastic materials.
In these problems it is unavoidable to always look at the whole process. It starts with modelling the problem into equations, analysing these equations (existence, uniqueness and regularity of solutions and well-posedness of the problem), finding the right numerical schemes and validating the results against the process which has been modelled. Very often there is a large gap between the control of the discrete process and the numerical approximation of the model equations, and some of these differences are explainable within the theory of hyperbolic partial differential equations and are not down to numerical or calculation errors.
In the studies of Prof. Zuazua, the interaction between the numerical grid and the propagation of waves of different frequencies leads to very intuitive results which also provide clear guidelines on what to do about the so-called spurious wave phenomena produced by high frequencies, an example of which is shown in this podcast episode image.
This is an inherent property of this sort of equations, which are able to model the many variants of waves that exist. They are rich but also difficult to handle. This difficulty is visible in the number of results on existence, uniqueness and regularity, which is tiny compared to elliptic and parabolic equations, but also in the difficulty of finding the right numerical schemes for them. On the other hand, they have the big advantage that they are best suited for finding effective methods on massively parallel computers. There is also a strong connection to so-called inverse problems, both on the theoretical side and through applications where the measurement of waves is used to find oil and water in the ground, e.g. (see our podcast Modell004 on Oil Exploration).
Prof. Zuazua has a lot of experience in working together with engineers. His first joint project was shape optimization for airfoils. The geometric form and the waves around it interact in a lot of ways and on different levels. Water management also poses a lot of interesting and open questions, on which he is working with colleagues in Zaragoza. At the moment there is a strong collaboration with the group of Prof. Leugering in Erlangen, which is involved in a Transregio research initiative on gas networks - a fascinating topic ranging from our everyday expectation of a reliable water and gas supply at home to the latest mathematical research on control. Of course, in working with engineers there is always a certain delay (in both directions), since the culture, the results and the questions have to be translated and formulated in a relevant form between engineers and mathematicians.
In dealing with these questions there are two main risks: firstly, one finds wrong results which are obviously wrong; and secondly, wrong results which look right but are wrong nonetheless. Here it is the crucial role of mathematicians to have the right framework to find these errors.
Prof. Zuazua is a proud Basque. Of the 2.5 million members of the Basque people, most live in Spain with a minority status for their culture and language. But since the end of the Franco era, this has been translated into special efforts to push culture and education in the region. In less than 40 years this transformed the society immensely and led to modern universities and relevant science and culture which grew out of "nothing". Now Spain and the Basque country have strong bonds to the part of Europe on the other side of the Pyrenees, and especially with industry and research in Germany. The Basque university has several campuses and teaches 40,000 students. This success could be a good example of how to extend our education system and provide possibilities for young people - something which is so much a part of our culture in Europe - across the boundaries of our continent.