
My blog doesn’t really have a theme; it’s just a random collection of things that are going on in my life that I want to share. I keep the topics pretty broad so more people will be inclined to read. As a result, the math posts tend to be few. I don’t want my blog to be a “math blog” (you can find a list of such blogs here, or just google “math blogs”). It should be noted that online collaborative mathematics has become quite popular recently; sites like Math Overflow and Terence Tao’s blog are great examples.

Having said that, I’ve decided that my next few posts will be more math related; in particular, I’ll be posting a couple of algebra theorems each week for the next few weeks. Here’s why. The algebra qualifying exam is in about a month and I’ve been studying really hard for it. I know that some of the other TAs are studying really hard for it too; and since some of them occasionally glance at my blog (I know, I can’t believe it either), I think it’s worthwhile to share some of the key algebra theorems.

For the mathematically inclined, please feel free to comment and critique any of the results you see posted. A lot of the proofs are my own, which means there might (will) be errors. Enjoy.


I compiled this post from sources I found online, along with some of my own thoughts. Being the poor grad student that I am, I failed to cite my sources; so I can’t take full credit for writing this.

Why study mathematics?

Mathematics is more than just the science of numbers taught by teachers in schools and either enjoyed or feared by many students. It plays a significant role in the lives of individuals and in society as a whole. Mathematics is an essential discipline recognized worldwide, and it needs to be emphasized in education to equip students with the skills necessary for higher education, career aspirations, and personal fulfillment. Its significance to education includes, but is not limited to, the following aspects.

Enhances problem solving and analysis skills. Mathematics enhances our logical, functional and aesthetic skills. Problems enable us to apply our skills to both familiar and unfamiliar situations, giving us the ability to use tested theories and also to create our own before applying them. By developing problem solving strategies, we learn to understand problems, devise plans, carry out plans, and analyze and review the accuracy of our solutions. The methods involved in problem solving develop the use of reasoning, careful argument, and decision making.

Applied in daily life. Mathematics is not a mere subject that prepares students for higher academic attainment or job qualification in the future. It is not all about practicing calculations in algebra, statistics and algorithms that, after all, computers are capable of doing. It is more about how it compels the human brain to formulate problems, theories and methods of solution. It prepares us to face a variety of simple to multifaceted challenges every human being encounters on a daily basis. Irrespective of your status in life and however basic your skills are, you apply mathematics. Daily activities, including the mundane things you do, rely on your ability to count, add or multiply. You encounter numbers every day in memorizing phone numbers, buying groceries, cooking food, balancing a budget, paying bills, estimating gasoline consumption, measuring distance and managing your time. In the fields of business and economy, including the diverse industries existing around you, basic to complex math applications are crucial.

Base for all technologies. Anywhere in the world, mathematics is employed as a key instrument in a diversity of fields such as medicine, engineering, natural science, social science, physical science, tech science, business and commerce, etc. Application of mathematical knowledge in every field of study and industry produces new discoveries and advancement of new disciplines. All innovations introduced worldwide, every product of technology that man gets pleasure from is a byproduct of Science and Math. The ease and convenience people enjoy today from the discoveries of computers, automobiles, aircraft, household and personal gadgets would never have happened if it were not for this essential tool used in technology.

Career aspirations. Every branch of Mathematics has distinct applications in different types of careers. The skills enhanced from practicing math such as analyzing patterns, logical thinking, problem solving and the ability to see relationships can help you prepare for your chosen career and enable you to compete for interesting and high-paying jobs against people around the globe. Even if you do not take up math-intensive courses, you have the edge to compete against other job applicants if you have a strong mathematical background, as industries are constantly evolving together with fast-paced technology.

Since mathematics encompasses all aspects of human life, it is unquestionably important in education to help students and all people from all walks of life perform daily tasks efficiently and become productive, well-informed, functional, independent individuals and members of a society where Math is a fundamental component.

This is a classic result in differential geometry and is worth mentioning in these posts on minimal surfaces. Before we can talk about the deformation we need a definition.

Definition. A minimal surface described by the Weierstrass-Enneper data (f,g) or F(\tau) has an associated family of minimal surfaces given by, respectively, (e^{it}f,g) or e^{it}F(\tau).

The catenoid has Weierstrass-Enneper representation (f,g)=(-\frac{e^{-z}}{2},-e^z). Thus, the associated family of surfaces of the catenoid has Weierstrass-Enneper representation (f,g)=(-\frac{e^{-z}}{2}e^{it},-e^z), which corresponds to the following standard parametrization.

\textbf{x}(u,v)=(x^1(u,v),x^2(u,v),x^3(u,v)), for any fixed t, where

x^1(u, v) = \cos(t)\cos(v)\cosh(u) + \sin(t)\sin(v)\sinh(u)

x^2(u, v) = \cos(t)\cosh(u)\sin(v) - \cos(v)\sin(t)\sinh(u)

x^3(u, v) = u\cos(t) + v\sin(t)

This is a very beautiful result in minimal surface theory: the catenoid can be continuously deformed into the helicoid by the transformation given above, where t=0 represents the catenoid and t=\frac{\pi}{2} represents the helicoid. It should be pointed out that the parametrization above represents a minimal surface for every value of t. That is, any surface in the associated family of a minimal surface is itself minimal.

The surfaces below, plotted for different values of t, represent the associated family of minimal surfaces of the catenoid.
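If you would like to reproduce plots like these, here is a short Python sketch (my own, assuming numpy and matplotlib are installed; the grid ranges and the particular values of t are arbitrary choices) that plots members of the associated family using the parametrization above.

```python
import numpy as np
import matplotlib.pyplot as plt

u, v = np.meshgrid(np.linspace(-2, 2, 60), np.linspace(0, 2 * np.pi, 80))

def associated_surface(t):
    """Member of the catenoid's associated family for deformation parameter t."""
    x1 = np.cos(t) * np.cos(v) * np.cosh(u) + np.sin(t) * np.sin(v) * np.sinh(u)
    x2 = np.cos(t) * np.cosh(u) * np.sin(v) - np.cos(v) * np.sin(t) * np.sinh(u)
    x3 = u * np.cos(t) + v * np.sin(t)
    return x1, x2, x3

# t = 0 gives the catenoid and t = pi/2 gives the helicoid.
fig = plt.figure(figsize=(12, 3))
for k, t in enumerate([0, np.pi / 6, np.pi / 3, np.pi / 2]):
    ax = fig.add_subplot(1, 4, k + 1, projection='3d')
    ax.plot_surface(*associated_surface(t))
    ax.set_title(f"t = {t:.2f}")
plt.show()
```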

I’ve spent a few posts talking about the theory behind minimal surfaces. So what? Let’s actually look at some.

Until the late 18th century, the only known minimal surface was the plane. This changed when Jean Baptiste Meusnier discovered the first non-planar minimal surfaces: the catenoid and the helicoid.

The catenoid may be parametrized as \textbf{x}(u,v)=(a\cosh(v)\cos(u),a\cosh(v)\sin(u),av). This is a surface of revolution generated by rotating the catenary y=a\cosh(\frac{z}{a}) about the z-axis. It is easily checked that the mean curvature of \textbf{x}(u,v) is zero. Thus, the catenoid is a minimal surface. It can be characterized as essentially the only minimal surface of revolution: if a surface of revolution M is a minimal surface, then M is contained in either a plane or a catenoid.

the catenoid

The helicoid may be parametrized as \textbf{x}(u,v)=(a\sinh(v)\cos(u),a\sinh(v)\sin(u),au). It is easily checked that the mean curvature of \textbf{x}(u,v) is zero. Thus, the helicoid is a minimal surface. It can be characterized as the only minimal surface, other than the plane, which is also a ruled surface. That is, any ruled minimal surface in \mathbb{R}^3 is part of a plane or a helicoid.

the helicoid
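Since “easily checked” is doing a lot of work in the last two paragraphs, here is a sympy sketch (my own, with a = 1 for simplicity) that computes the mean curvature from the first and second fundamental forms, H = (lG - 2mF + nE)/(2(EG - F^2)), and confirms that it vanishes for both parametrizations.

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

def mean_curvature(X):
    """Mean curvature H = (lG - 2mF + nE) / (2(EG - F^2)) of a patch X(u, v)."""
    Xu = [sp.diff(x, u) for x in X]
    Xv = [sp.diff(x, v) for x in X]
    E = sum(a * b for a, b in zip(Xu, Xu))
    F = sum(a * b for a, b in zip(Xu, Xv))
    G = sum(a * b for a, b in zip(Xv, Xv))
    N = sp.Matrix(Xu).cross(sp.Matrix(Xv))      # normal vector (not yet unit length)
    U = N / sp.sqrt(N.dot(N))                   # unit normal
    l = U.dot(sp.Matrix([sp.diff(x, u, 2) for x in X]))
    m = U.dot(sp.Matrix([sp.diff(x, u, v) for x in X]))
    n = U.dot(sp.Matrix([sp.diff(x, v, 2) for x in X]))
    return sp.simplify((l * G - 2 * m * F + n * E) / (2 * (E * G - F**2)))

catenoid = (sp.cosh(v) * sp.cos(u), sp.cosh(v) * sp.sin(u), v)
helicoid = (sp.sinh(v) * sp.cos(u), sp.sinh(v) * sp.sin(u), u)
print(mean_curvature(catenoid), mean_curvature(helicoid))   # 0 0
```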

Placing geometric restrictions on surfaces is a common theme in the classification of minimal surfaces. For example, assuming a surface has a parametrization of the form \textbf{x}(u,v)=g(u)+h(v), one can explicitly solve the resulting differential equation to find the minimal surface solution f(x,y)=\frac{1}{a}\ln\left(\frac{\cos ax}{\cos ay}\right), which locally parametrizes Scherk’s minimal surface (discovered by Heinrich Ferdinand Scherk in 1835). Note that although this surface can be realized locally as a graph, its domain of definition is not the entire plane: it must be represented on patches of the form -\frac{\pi}{2}<ax<\frac{\pi}{2} and -\frac{\pi}{2}<ay<\frac{\pi}{2}.

Scherk's surface
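For a graph z=f(x,y), being minimal is equivalent to satisfying the minimal surface equation (1+f_y^2)f_{xx}-2f_xf_yf_{xy}+(1+f_x^2)f_{yy}=0 (a standard fact that I am quoting here, not deriving). A quick sympy check, with the constant a left symbolic, confirms that Scherk’s f satisfies it.

```python
import sympy as sp

x, y, a = sp.symbols('x y a', real=True)
f = sp.log(sp.cos(a * x) / sp.cos(a * y)) / a    # Scherk's surface as a graph z = f(x, y)

fx, fy = sp.diff(f, x), sp.diff(f, y)
fxx, fyy, fxy = sp.diff(f, x, 2), sp.diff(f, y, 2), sp.diff(f, x, y)

# Minimal surface equation for a graph: (1 + f_y^2) f_xx - 2 f_x f_y f_xy + (1 + f_x^2) f_yy = 0
print(sp.simplify((1 + fy**2) * fxx - 2 * fx * fy * fxy + (1 + fx**2) * fyy))   # 0
```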

This problem was asked in a job interview for a software engineering position at Google.

There is a staircase with 100 steps. How many ways can you walk from the bottom to the top of the staircase if you are only allowed to take one step or two steps at a time?
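The post leaves the answer open, so here is a quick dynamic-programming sketch of my own. If ways(n) denotes the number of ways to climb n steps, then the last move is either a single step or a double step, so ways(n) = ways(n-1) + ways(n-2); this is the Fibonacci recurrence, and for 100 steps the count is the 101st Fibonacci number, a 21-digit integer.

```python
def count_ways(n):
    """Number of ways to climb n steps taking one or two steps at a time."""
    ways_prev, ways_curr = 1, 1          # ways(0) = 1 (empty walk), ways(1) = 1
    for _ in range(n - 1):
        ways_prev, ways_curr = ways_curr, ways_prev + ways_curr
    return ways_curr

print(count_ways(100))   # 573147844013817084101
```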

As the rainy season is upon us, you may be asking yourself whether it’s better to walk or run through the rain. Let’s try to analyze this problem mathematically. Note: this was taken from an article I found here; I have merely reproduced and simplified it a bit for your reading pleasure.

The formal solution looks something like this

\displaystyle \frac{dw}{dt}=-\int\rho v\;dA.

This looks a bit menacing. Here, \frac{dw}{dt} is the rate at which you’re getting wet (mass of rain per unit time incident on your body), \rho is the density of the rain shower (mass of water per unit volume of atmosphere), v is the velocity of the rain relative to you, and dA represents a little bit of your body surface.

The relative velocity of the rain depends on the rain’s velocity, and your own velocity. This is where we can introduce the possibility of someone running around in the rain. The relative rain velocity, v, is equal to the true velocity of the rain minus the velocity of your body. We can now put these in the above equation and write

\displaystyle \frac{dw}{dt}=\int\rho(v_p-v_r)\;dA

where v_p is the velocity of the person and v_r is the velocity of the rain. (They’re not the wrong way around, because we dropped the minus sign in front of the integral.)

SO WHAT?
Precisely! The problem with a solution like this is that, although it is designed to be exactly correct, it is far too complicated to be of much use, because it can’t easily be calculated.

For a start, the shape of a human body is too complex, and all parts of it are in different states of motion when running. To get some answers the formal solution must be simplified by making some assumptions and approximations. Physicists do this all the time – it is called “cheating”.

AN APPROXIMATE SOLUTION
This is where the fun starts. To get some idea of how running around in the rain affects wetness, we’ll need to make some fairly significant simplifications.

We will assume that the rain is falling vertically and also that the person is running horizontally. To get around the problem of our complex body shape, we’ll imagine our person as a rectangular block – like a house brick standing on end. The smaller top surface of this “brick” is of area a and represents all our own top surfaces (head and shoulders.) The larger front surface of this brick is of area A and represents our front surfaces (chest, stomach, front of arms, front of legs etc.) This approximation won’t give us the complete truth – but it might provide some insight into what is going on.

This enables us to produce our first “total wetness” equation. It can be derived from the formal solution above, or worked out by other reasoning. Anyway, here it goes.

THE (SIMPLIFIED) TOTAL WETNESS EQUATION

\displaystyle w=\rho(av_r+Av_p)t

Here w is the “total wetness” (the total mass of rainwater on your body), \rho is the rain shower density as before, a is our top surface area, A is our front surface area, v_r is the rain velocity, v_p is the person’s velocity, and t is the time spent out in the rain.

Looking at the equation, it’s clear that there is little we can do about the rain velocity, rain density and the size of our bodies (except by dieting). The only quantities we can directly control in the total wetness equation are t (the time spent in the rain) and v_p (how fast we’re running).

The equation tells us quite clearly that we get most wet if we (1) stay out in the rain for a long time (no surprise there) and (2) run very fast.

So running fast actually makes us wetter according to this analysis – the reason being that you are moving your front surface through the “rain field”, scooping up water as you go.

By the way, should you ever want to get really wet, the equation suggests you should stay out in the rain for a long time whilst running around like a maniac.

There’s more to it than this though. Although running fast looks like a bad idea, what if we are running towards shelter – surely by running we will minimise the time spent in the rain? This is a fair point, and makes the first equation look incorrect – but in fact it is fine.

This is because the equation “knows” nothing about the possibility of shelter. It simply tells us that if you’re in the rain, the best thing to do is stand still. However, we can introduce the idea of shelter into it to get some further advice.

Let’s assume that when it starts to rain, you identify the nearest shelter and run towards it. If the distance to the shelter is d, then the time spent in the rain (t in the above equation) will be \frac{d}{v_p}.

If we insert this into the “total wetness” equation to replace t, we get the “modified simplified total wetness equation” which now includes the distance to the shelter d.

\displaystyle w=\rho\left(a\frac{v_r}{v_p}+A\right)d

So here we have it – more mathematical advice to avoid getting wet. Because we divide by v_p in this equation, maximising our velocity now emerges as a good idea, assuming there is a shelter available.
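To put some numbers on this, here is a small Python sketch comparing the two situations: wetness for a fixed time out in the rain (first equation) versus wetness for a fixed distance to shelter (second equation). Every parameter value below is an illustrative guess of mine, not a measurement.

```python
# Illustrative values only: rho is the mass of airborne water per unit volume (kg/m^3),
# a and A are the top and front areas (m^2), velocities are in m/s, distance in m.
rho, a, A = 1e-3, 0.1, 0.7
v_rain, d = 5.0, 100.0

def wetness_fixed_time(v_p, t):
    """w = rho*(a*v_r + A*v_p)*t : wetness after spending a fixed time t in the rain."""
    return rho * (a * v_rain + A * v_p) * t

def wetness_to_shelter(v_p):
    """w = rho*(a*v_r/v_p + A)*d : wetness accumulated while covering distance d to shelter."""
    return rho * (a * v_rain / v_p + A) * d

for v_p in (1.5, 3.0, 6.0):   # walk, jog, sprint
    print(f"v_p = {v_p} m/s: fixed 60 s -> {wetness_fixed_time(v_p, 60):.3f} kg, "
          f"run {d:.0f} m to shelter -> {wetness_to_shelter(v_p):.3f} kg")
# The first column grows with speed; the second shrinks toward the minimum rho*A*d.
```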

SO SHOULD I RUN IN THE RAIN OR NOT?
When it starts to rain, first identify the nearest shelter, and then run to it as quickly as you can.

This is remarkable, because that is precisely what most people do! The power of mathematics has finally given us the reassurance that, when we run for that bus shelter, store canopy or random shop (and start pretending to browse), we are getting it exactly right!

However, the equation shows that you get wet no matter how fast you run, with a minimum value of w=\rho Ad.

PS: If the rain is falling at an angle it is possible to decrease your total wetness by running in the correct direction. Unfortunately this may not coincide with the nearest shelter direction.

PPS: Alternatively, ignore the math and get an umbrella.

I couldn’t put it any better. This is what I love about math education and why it is so important. You can check out his blog here.

As a part of my measure theory class each student must lecture on a certain topic. Today I lectured on the Jordan decomposition and the Radon-Nikodym Theorem. Lecturing to freshman precalculus students is one thing; lecturing to your peers and professor about measure theory is another. You encourage precalc students to ask questions and be engaged with the lecture. As the teacher you try to make the lecture more of a discussion. This is something you can do because you know and understand the material. However, this can be difficult when you don’t fully understand the subject (like measure theory). The other TAs and I like to joke about our measure theory lectures: fake it ’til you make it. We might not have mastered the material (yet), but we have the fundamentals. From there it’s all about looking and feeling confident when delivering your lecture.

For those who are interested, these are my notes on the lecture I gave today. The Elements of Integration and Lebesgue Measure by Robert Bartle is the reference for specific theorems, corollaries, and lemmas.

Definition. Let \lambda be a charge on \mathbb{X} and let (P,N) be a Hahn decomposition for \lambda. The positive and the negative variations of \lambda are the finite measures \lambda^+, \lambda^- defined for E in \mathbb{X} by \lambda^+(E)=\lambda(E\cap P), \lambda^-(E)=-\lambda(E\cap N). The total variation of \lambda is the measure \left|\lambda\right| defined for E in \mathbb{X} by \left|\lambda\right|(E)=\lambda^+(E)+\lambda^-(E).

Note that, by lemma 8.3, \lambda^+ and \lambda^- are well defined and do not depend on the Hahn decomposition.

Theorem 8.5 Jordan Decomposition Theorem. If \lambda is a charge on \mathbb{X} then \lambda(E)=\lambda^+(E)-\lambda^-(E) for all E in \mathbb{X}. Moreover, if \lambda=\mu-\nu where \mu and \nu are finite measures on \mathbb{X} then \lambda^+(E)\leq\mu(E) and \lambda^-(E)\leq\nu(E) for all E in \mathbb{X}.

Proof. We prove the first assertion. Let E\in X and let (P,N) be a Hahn decomposition of X. Then P\cup N=X and P\cap N=\emptyset. We have,

\displaystyle \lambda(E)=\lambda\left((E\cap P)\cup(E\cap N)\right)=\lambda(E\cap P)+\lambda(E\cap N)=\lambda^+(E)-\lambda^-(E).

We prove the second assertion. Since \mu and \nu have nonnegative values,

\displaystyle \lambda^+(E)=\lambda(E\cap P)=\mu(E\cap P)-\nu(E\cap P)\leq\mu(E\cap P)\leq\mu(E).

Similarly, \lambda^-(E)\leq\nu(E). \Box

Theorem 8.6. If f is integrable and \lambda is defined for E\in\mathbb{X} by \lambda(E)=\int_Ef\,d\mu, then

\displaystyle \lambda^+(E)=\int_Ef^+\,d\mu \quad \lambda^-(E)=\int_Ef^-\,d\mu \quad \left|\lambda\right|(E)=\int_E\left|f\right|\,d\mu.

Proof. Let P=\lbrace x\in X:f(x)\geq0\rbrace and N=\lbrace x\in X:f(x)<0\rbrace. Then X=P\cup N and P\cap N=\emptyset. Let E\in\mathbb{X}. Then

\displaystyle \lambda(E\cap P)=\int_{E\cap P}f^+\,d\mu\geq0 \qquad \lambda(E\cap N)=-\int_{E\cap N}f^-\,d\mu\leq0.

Thus, (P,N) is a Hahn decomposition of X with respect to \lambda. Now, by the definition of variation we have that

\displaystyle \lambda^+(E)=\lambda(E\cap P)=\int_Ef^+\,d\mu

\displaystyle \lambda^-(E)=-\lambda(E\cap N)=\int_Ef^-\,d\mu

\displaystyle |\lambda|(E)=\lambda^+(E)+\lambda^-(E)=\int_E|f|\,d\mu.

and the theorem is established. \Box
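To make Theorem 8.6 a little more concrete, here is a small example of my own (it is not in Bartle). Let \mu be Lebesgue measure on X=[-1,1], let f(x)=x, and define \lambda(E)=\int_Ef\,d\mu. Then (P,N) with P=[0,1] and N=[-1,0) is a Hahn decomposition for \lambda, and for E=[-1,1] the theorem gives

\displaystyle \lambda^+(E)=\int_{-1}^{1}x^+\,dx=\frac{1}{2} \qquad \lambda^-(E)=\int_{-1}^{1}x^-\,dx=\frac{1}{2} \qquad \left|\lambda\right|(E)=\int_{-1}^{1}\left|x\right|\,dx=1,

while \lambda(E)=\int_{-1}^{1}x\,dx=0=\lambda^+(E)-\lambda^-(E), exactly as the Jordan decomposition predicts.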

Definition. A measure \lambda on \mathbb{X} is said to be absolutely continuous with respect to a measure \mu on \mathbb{X} if E\in\mathbb{X} and \mu(E)=0 imply that \lambda(E)=0. In this case we write \lambda\ll\mu. A charge \lambda is absolutely continuous with respect to a charge \mu provided that the total variation \left|\lambda\right| of \lambda is absolutely continuous with respect to \left|\mu\right|.

Example. Let f\in M^+ and \lambda(E)=\int f\chi_E\,d\mu. Recall that \lambda is a measure by corollary 4.9. If \mu(E)=0 for some E\in\mathbb{X} then f\chi_E=0 \mu-almost everywhere. Thus,

\displaystyle \lambda(E)=\int f\chi_E\,d\mu=\int 0\,d\mu=0

which shows that \lambda is absolutely continuous with respect to \mu.

Example. Let \lambda be Lebesgue measure and \mu the counting measure on \mathbb{R}. Then \mu(E)=0 if and only if E=\emptyset. Hence, \mu(E)=0 implies that \lambda(E)=\lambda(\emptyset)=0, which shows that \lambda\ll\mu. However, if E=\lbrace x\rbrace, the singleton set, then \lambda(E)=0 but \mu(E)=1. Thus, \mu is not absolutely continuous with respect to \lambda.

Absolute continuity can also be characterized as follows.

Lemma 8.8. Let \lambda and \mu be finite measures on \mathbb{X}. Then \lambda\ll\mu if and only if for every \epsilon>0 there exists a \delta(\epsilon)>0 such that E\in\mathbb{X} and \mu(E)<\delta(\epsilon) imply that \lambda(E)<\epsilon.

Proof. Suppose the stated condition holds and let E\in\mathbb{X} with \mu(E)=0. Then \mu(E)<\delta(\epsilon) for every \epsilon>0, so \lambda(E)<\epsilon for every \epsilon>0; hence \lambda(E)=0 and \lambda\ll\mu. Conversely, suppose that the condition fails: there exists \epsilon>0 and sets E_n\in\mathbb{X} with \mu(E_n)<\frac{1}{2^n} and \lambda(E_n)\geq\epsilon. Let F_n=\cup_{k=n}^{\infty}E_k so that \mu(F_n)<\frac{1}{2^{n-1}} and \lambda(F_n)\geq\epsilon. Since (F_n) is a decreasing sequence of measurable sets, we have that

\displaystyle \mu\left(\bigcap_{n=1}^{\infty}F_n\right)=\lim_{n\to\infty}\mu(F_n)=0 \qquad \lambda\left(\bigcap_{n=1}^{\infty}F_n\right)=\lim_{n\to\infty}\lambda(F_n)\geq\epsilon.

Hence, E=\bigcap_{n=1}^{\infty}F_n satisfies \mu(E)=0 but \lambda(E)\geq\epsilon>0, so \lambda is not absolutely continuous with respect to \mu. \Box

Recall, corollary 4.9 states that if f\in M^+ then \lambda(E)=\int_Ef\,d\mu is a measure. Conversely, when can we express a measure \lambda as an integral with respect to \mu of some function f\in M^+?

Corollary 4.11 showed that a necessary condition for this to hold is that \lambda\ll\mu. It turns out that this condition is also sufficient when \lambda and \mu are \sigma-finite. We state this result as a theorem.

Theorem 8.9 Radon-Nikodym Theorem. Let \lambda and \mu be \sigma-finite measures on \mathbb{X} with \lambda\ll\mu. Then there exists f\in M^+ such that \lambda(E)=\int_Ef\,d\mu for E\in\mathbb{X}. Moreover, the function f is uniquely determined \mu-almost everywhere.
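As a quick illustration of the theorem (again an example of my own): let \mu be Lebesgue measure on X=[0,1] and define \lambda(E)=\int_Ex^2\,dx. Both measures are finite, hence \sigma-finite, and \lambda\ll\mu, so the theorem produces a function, here f(x)=x^2, with

\displaystyle \lambda(E)=\int_Ex^2\,d\mu \quad\text{for all } E\in\mathbb{X};

this f is often written \frac{d\lambda}{d\mu} and called the Radon-Nikodym derivative. The uniqueness is only \mu-almost everywhere: changing f on a set of Lebesgue measure zero, say the rationals, gives another function that represents \lambda equally well.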

Let \textbf{x}(u,v) be a regular parametrized surface and let z=u+iv denote the corresponding complex coordinate. Since u=\frac{z+\bar{z}}{2} and v=\frac{-i(z-\bar{z})}{2}, we may write \textbf{x}(z,\bar{z})=\left(x^1(z,\bar{z}),x^2(z,\bar{z}),x^3(z,\bar{z})\right).

We define the complex function \phi as follows,

\phi(z):=\left(\frac{1}{2}\left(\frac{\partial x^1}{\partial u}-i\frac{\partial x^1}{\partial v}\right),\frac{1}{2}\left(\frac{\partial x^2}{\partial u}-i\frac{\partial x^2}{\partial v}\right),\frac{1}{2}\left(\frac{\partial x^3}{\partial u}-i\frac{\partial x^3}{\partial v}\right)\right)

Theorem. Let M be a surface with patch \textbf{x}(u,v) and let \phi=\frac{\partial\textbf{x}}{\partial z}. Then \textbf{x}(u,v) is isothermal if, and only if, (\phi^1)^2+(\phi^2)^2+(\phi^3)^2=0.

Proof. Suppose that \textbf{x}(u,v) is isothermal. Note that (\phi^i)^2=\frac{1}{4}[(x^i_u)^2-(x^i_v)^2-2ix^i_ux^i_v]. Therefore,

\displaystyle (\phi)^2=\sum_{i=1}^{3}(\phi^i)^2=\frac{1}{4}\left[\sum_{i=1}^{3}(x^i_u)^2-\sum_{i=1}^{3}(x^i_v)^2-2i\sum_{i=1}^{3}x^i_ux^i_v\right]

\displaystyle =\frac{1}{4}\left(|\textbf{x}_u|^2-|\textbf{x}_v|^2-2i\langle\textbf{x}_u,\textbf{x}_v\rangle\right)=\frac{1}{4}\left(E-G-2iF\right)=0.

Conversely, suppose that (\phi)^2=0. Then \frac{1}{4}\left(E-G-2iF\right)=0, and by properties of complex numbers this equation holds only if E-G=0 and F=0, which shows that \textbf{x}(u,v) is isothermal. \Box

Theorem. Suppose that M is a surface with patch \textbf{x}(u,v). Let \phi=\frac{\partial\textbf{x}}{\partial z} and suppose that (\phi)^2=0 (i.e. \textbf{x} is isothermal). Then M is minimal if, and only if, each \phi^i is analytic.

Proof. Suppose that M is minimal; then \textbf{x}(u,v) is harmonic, that is, \Delta\textbf{x}=0. Thus, \frac{\partial\phi}{\partial\bar{z}}=\frac{\partial}{\partial\bar{z}}\left(\frac{\partial\textbf{x}}{\partial z}\right)=\frac{1}{4}\Delta\textbf{x}=0. Therefore, each \phi^i=\frac{\partial x^i}{\partial z} is analytic. Conversely, the same calculation shows that if each \phi^i is analytic, then each x^i is harmonic; therefore, M is minimal. \Box

Corollary. x^i(z,\bar{z})=c_i+2\text{Re}\int\phi^i\;dz.

Proof. Since z=u+iv, we may write dz=du+idv. Then

\phi^idz=\frac{1}{2}\left[(x^i_u-ix^i_v)(du+idv)\right]=\frac{1}{2}\left[x^i_udu+x^i_vdv+i(x^i_udv-x^i_vdu)\right]

\bar{\phi}^id\bar{z}=\frac{1}{2}\left[(x^i_u+ix^i_v)(du-idv)\right]=\frac{1}{2}\left[x^i_udu+x^i_vdv-i(x^i_udv-x^i_vdu)\right].

We then have that dx^i=\frac{\partial x^i}{\partial z}dz+\frac{\partial x^i}{\partial\bar{z}}d\bar{z}=\phi^idz+\bar{\phi}^id\bar{z}=2\text{Re}(\phi^i\,dz), and we can now integrate to get x^i. \Box

The problem of constructing minimal surfaces reduces to finding a triple of analytic functions \phi=(\phi^1,\phi^2,\phi^3) with (\phi)^2=0. A nice way of constructing such a \phi is to take an analytic function f and a meromorphic function g with fg^2 analytic. Now, if we let f=\phi^1-i\phi^2 and g=\frac{\phi^3}{\phi^1-i\phi^2} then we have,

\phi^1=\frac{1}{2}f(1-g^2) \quad \phi^2=\frac{i}{2}f(1+g^2) \quad \phi^3=fg.

Note that f is analytic, g is meromorphic, and fg^2 is analytic since fg^2=-(\phi^1+i\phi^2). Furthermore, it is easily verified that (\phi)^2=0. Therefore, \phi determines a minimal surface.
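The “easily verified” claim is a one-line symbolic computation. Here is a sympy sketch of my own, with f and g treated as formal symbols, which is enough because the identities are purely algebraic.

```python
import sympy as sp

f, g = sp.symbols('f g')    # formal symbols standing in for f(z) and g(z)

phi1 = sp.Rational(1, 2) * f * (1 - g**2)
phi2 = sp.I / 2 * f * (1 + g**2)
phi3 = f * g

print(sp.expand(phi1**2 + phi2**2 + phi3**2))     # 0, so (phi)^2 = 0
print(sp.expand(f * g**2 + phi1 + sp.I * phi2))   # 0, so f*g^2 = -(phi^1 + i*phi^2)
```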

Theorem. (Weierstrass-Enneper Representation I) If f is analytic on a domain D, g is meromorphic on D, and fg^2 is analytic on D then a minimal surface is defined by \textbf{x}(z,\bar{z})=\left(x^1(z,\bar{z}),x^2(z,\bar{z}),x^3(z,\bar{z})\right), where

\displaystyle x^1(z,\bar{z})=\text{Re}\int f(1-g^2)\;dz

\displaystyle x^2(z,\bar{z})=\text{Re}\int if(1+g^2)\;dz

\displaystyle x^3(z,\bar{z})=\text{Re}2\int fg\;dz.

Suppose that g is analytic and has an analytic inverse g^{-1} in a domain D. Then we can consider g as a new complex variable \tau=g with d\tau=g'dz. Define F(\tau)=\frac{f}{g'} and obtain F(\tau)d\tau=fdz. Therefore, if we replace g by \tau and fdz by F(\tau)d\tau we get the following.

Theorem. (Weierstrass-Enneper Representation II) For any analytic function F(\tau), a minimal surface is defined by \textbf{x}(z,\bar{z})=\left(x^1(z,\bar{z}),x^2(z,\bar{z}),x^3(z,\bar{z})\right), where

\displaystyle x^1(z,\bar{z})=\text{Re}\int(1-\tau^2)F(\tau)\;d\tau

\displaystyle x^2(z,\bar{z})=\text{Re}\int i(1+\tau^2)F(\tau)\;d\tau

\displaystyle x^3(z,\bar{z})=\text{Re}2\int\tau F(\tau)\;d\tau.

Note the corresponding \phi=\left(\frac{1}{2}(1-\tau^2)F(\tau),\;\frac{i}{2}(1+\tau^2)F(\tau),\;\tau F(\tau)\right).

This representation tells us that any analytic function F(\tau) defines a minimal surface.

We can now use the Weierstrass-Enneper representation to produce minimal surfaces. For example, if (f, g) = (1, z) then we obtain a parametrization for Enneper’s surface. In fact, if (f,g) = (1,z^n) then an nth order Enneper’s surface is obtained.
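To see the representation in action, here is a sympy sketch of my own that carries out the three integrals for (f,g)=(1,z) and recovers the usual parametrization of Enneper’s surface (all integration constants are taken to be zero).

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
z = sp.Symbol('z')
f, g = sp.Integer(1), z              # Weierstrass-Enneper data for Enneper's surface

# x^1 = Re int f(1 - g^2) dz,  x^2 = Re int i f(1 + g^2) dz,  x^3 = Re 2 int f g dz
antiderivatives = [sp.integrate(f * (1 - g**2), z),
                   sp.integrate(sp.I * f * (1 + g**2), z),
                   2 * sp.integrate(f * g, z)]

# Substitute z = u + iv and take real parts.
patch = [sp.re(sp.expand(w.subs(z, u + sp.I * v))) for w in antiderivatives]
print(patch)   # Enneper's surface: (u - u**3/3 + u*v**2, -v - u**2*v + v**3/3, u**2 - v**2)
```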

First order Enneper surface

The Weierstrass-Enneper representation leads to infinite families of minimal surfaces and has proved fundamental in relating the study of minimal surfaces to the theory of complex analysis.

I have decided to post all of my notes on minimal surfaces. These notes are essentially a summary of how I spent my summer. Each post will build on the previous one, and there will be four in total: (1) the area functional, (2) harmonic functions and isothermal coordinates, (3) the Weierstrass-Enneper representation, and (4) examples.

Definition. Let \phi(x,y) be a real valued function of two real variables x and y defined on a domain D. The partial differential equation

\displaystyle \Delta\phi:=\phi_{xx}(x,y)+\phi_{yy}(x,y)=0

is known as Laplace’s equation. If \phi, \phi_x, \phi_y, \phi_{xx}, \phi_{xy}, \phi_{yx}, and \phi_{yy} are all continuous and if \phi(x,y) satisfies Laplace’s equation then \phi(x,y) is harmonic on D.

An interesting relationship between minimal surfaces and harmonic functions comes about when the surface is parametrized by isothermal coordinates.

Definition. A parametrization \textbf{x}(u,v) is called isothermal if E=\langle\textbf{x}_u,\textbf{x}_u\rangle=\langle\textbf{x}_v,\textbf{x}_v\rangle=G and F=\langle\textbf{x}_u,\textbf{x}_v\rangle=0.

Theorem. Isothermal coordinates exist on any surface M\subseteq\mathbb{R}^3.

Proof. A Survey of Minimal Surfaces [Osserman]. \Box

When isothermal parameters are used, there is a close relationship between the Laplace operator \Delta\textbf{x}=\textbf{x}_{uu}+\textbf{x}_{vv} and mean curvature. We have the following formulas for an orthogonal coordinate system (F=0):

\textbf{x}_{uu}=\frac{E_u}{2E}\textbf{x}_u-\frac{E_v}{2G}\textbf{x}_v+lU

\textbf{x}_{uv}=\frac{E_v}{2E}\textbf{x}_u+\frac{G_u}{2G}\textbf{x}_v+mU

\textbf{x}_{vv}=-\frac{G_u}{2E}\textbf{x}_u+\frac{G_v}{2G}\textbf{x}_v+nU.

Theorem. If the patch \textbf{x}(u,v) is isothermal then \Delta\textbf{x}(u,v)=\textbf{x}_{uu}+\textbf{x}_{vv}=(2EH)U.

Proof. Since E=G and F=0, we have

\displaystyle \textbf{x}_{uu}+\textbf{x}_{vv}=\left(\frac{E_u}{2E}\textbf{x}_u-\frac{E_v}{2G}\textbf{x}_v+lU\right)+\left(-\frac{G_u}{2E}\textbf{x}_u+\frac{G_v}{2G}\textbf{x}_v+nU\right)

\displaystyle =\frac{E_u}{2E}\textbf{x}_u-\frac{E_v}{2E}\textbf{x}_v+lU-\frac{E_u}{2E}\textbf{x}_u+\frac{E_v}{2E}\textbf{x}_v+nU=(l+n)U=2E\left(\frac{l+n}{2E}\right)U.

By examining the formula for mean curvature when E=G and F=0, we see that

\displaystyle H=\frac{lG-2mF+nE}{2(EG-F^2)}=\frac{lE+nE}{2E^2}=\frac{E(l+n)}{2E^2}=\frac{l+n}{2E}.

Therefore, \textbf{x}_{uu}+\textbf{x}_{vv}=(2EH)U. \Box

Corollary. A surface M:\textbf{x}(u,v)=\left(x^1(u,v),x^2(u,v),x^3(u,v)\right) with isothermal coordinates is minimal if, and only if, x^1, x^2, and x^3 are harmonic functions.

Proof. If M is minimal then H=0 and, by the previous theorem, \textbf{x}_{uu}+\textbf{x}_{vv}=(2EH)U=0, so x^1, x^2, and x^3 are harmonic. On the other hand, suppose that x^1, x^2, and x^3 are harmonic functions. Then \textbf{x}(u,v) is harmonic, so \textbf{x}_{uu}+\textbf{x}_{vv}=0 and, by the previous theorem, (2EH)U=0. Since U is the unit normal and E=\langle\textbf{x}_u,\textbf{x}_u\rangle\neq 0, it follows that H=0 and M is minimal. \Box
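As a sanity check of the corollary (my own quick computation, with a = 1): the catenoid patch \textbf{x}(u,v)=(\cosh(v)\cos(u),\cosh(v)\sin(u),v) from an earlier post is isothermal, and its three component functions are harmonic, so the corollary recovers the fact that the catenoid is minimal.

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
X = (sp.cosh(v) * sp.cos(u), sp.cosh(v) * sp.sin(u), v)   # catenoid patch with a = 1

Xu = [sp.diff(x, u) for x in X]
Xv = [sp.diff(x, v) for x in X]
E = sum(p * q for p, q in zip(Xu, Xu))
F = sum(p * q for p, q in zip(Xu, Xv))
G = sum(p * q for p, q in zip(Xv, Xv))
print(sp.simplify(E - G), sp.simplify(F))    # 0 0  -> E = G and F = 0, so the patch is isothermal

laplacians = [sp.simplify(sp.diff(x, u, 2) + sp.diff(x, v, 2)) for x in X]
print(laplacians)                            # [0, 0, 0] -> each x^i is harmonic, hence H = 0
```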