
As mentioned in the previous post, here is the algebra qual with my solutions. To be honest, this is one of the easiest quals I’ve seen from them. The scores were quite high; I scored 24 out of 25 and there were a couple of perfect scores. I think my only mistake came on problem 5(a), where I gave a pretty hand-wavy argument. Oh well…

1. Let $G$ be a group.

(a) Let $\phi : G \to G$ be defined by $\phi(g)=g^2$ for all $g\in G$. Prove that $\phi$ is a homomorphism if and only if $G$ is abelian.

Proof. Suppose that $\phi$ is a homomorphism. Let $a,b \in G$. Then $\phi(ab)=\phi(a)\phi(b)=a^2b^2=aabb$. Also, $\phi(ab)=(ab)^2=abab$. Thus, $aabb=abab$. Multiplying on the left by $a^{-1}$ and on the right by $b^{-1}$ we have that $ab=ba$.

Conversely, suppose that $G$ is abelian. Let $a,b \in G$. Then $\phi(ab)=(ab)^2=a^2b^2=\phi(a)\phi(b)$, where the second equality holds because $a$ and $b$ commute. $\Box$

(b) If $G$ is abelian and finite, show that $\phi$ is an automorphism if and only if $G$ has odd order.

Proof. Suppose that $\phi$ is an automorphism. Seeking a contradiction, assume that $G$ does not have odd order. Then $2$ is a prime which divides $|G|$, so by Cauchy’s theorem, $G$ has an element $a$ of order $2$. Then $\phi(a)=a^2=e$, so $a$ is a nontrivial element in the kernel of $\phi$. This, of course, is a contradiction because $\phi$ is injective and so $e$ must be the only element in the kernel of $\phi$.

Conversely, suppose that $G$ has odd order. Since $G$ is abelian, by part (a) we know that $\phi$ is a homomorphism. Moreover, because $\phi : G \to G$ and $|G|=|G|$, $\phi$ being injective will imply $\phi$ being surjective. Thus, it suffices to show that $\phi$ is injective. We do this by showing that the kernel of $\phi$ is trivial. Suppose that $a$ is in the kernel of $\phi$. Then $\phi(a)=a^2=e$. Thus $|a|$ must be $1$ or $2$. But, the order of $a$ must divide the order of $G$. Since $G$ has odd order, $|a|=1$ so $a=e$. Thus, the kernel of $\phi$ is trivial. $\Box$
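As a quick sanity check (not part of the proof), the statement can be tested numerically on the cyclic groups $\mathbb{Z}_n$, where in additive notation the squaring map $\phi(g)=g^2$ becomes doubling, $g\mapsto 2g \bmod n$. The helper name below is my own:

```python
# Sanity check on Z_n (additive): part (b) predicts that doubling is an
# automorphism exactly when n is odd.

def doubling_is_automorphism(n):
    """Check whether g -> 2g mod n is a bijection on Z_n."""
    image = {(2 * g) % n for g in range(n)}
    return len(image) == n  # bijective iff the image is all of Z_n

for n in range(1, 20):
    assert doubling_is_automorphism(n) == (n % 2 == 1)
```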

2. Let $G$ be a group and let $S=\{xyx^{-1}y^{-1}|x,y\in G\}$. Prove: If $H$ is a subgroup of $G$ and $S\subseteq H$, then $H$ is a normal subgroup of $G$.

Proof. Let $g\in G$ and $h\in H$. Since $H$ is a subgroup, $h^{-1}$ exists, and it suffices to show that $ghg^{-1}\in H$. We have that $ghg^{-1}=ghg^{-1}h^{-1}h$. But $ghg^{-1}h^{-1}\in S\subseteq H$ so $ghg^{-1}h^{-1}=\hat{h}$ for some $\hat{h}\in H$. Thus, $ghg^{-1}=ghg^{-1}h^{-1}h=\hat{h}h\in H$ because $H$ is closed. $\Box$

3. Prove that in an integral domain $D$ every prime element is an irreducible.

Proof. Let $p$ be a prime element of $D$. Then the ideal generated by $p$, $(p)$, is a prime ideal. Suppose that $p=xy$. Then $xy\in (p)$ and so $x\in (p)$ or $y\in (p)$. Thus, $x=pu=xyu$ or $y=pv=xyv$ for some $u,v\in D$. By cancellation in an integral domain we see that $1=yu$ or $1=xv$. In either case $y$ is a unit or $x$ is a unit which means that $p$ is an irreducible. $\Box$

4. Find necessary and sufficient conditions on $\alpha, \beta, \gamma \in \mathbb{R}$ such that the matrix

$\begin{pmatrix} 1 & \alpha & \beta \\ 0 & 0 & \gamma \\ 0 & 0 & 1 \end{pmatrix}$

is diagonalizable over $\mathbb{R}$.

Solution. Recall, an $n\times n$ matrix is diagonalizable if and only if the sum of the dimensions of the eigenspaces is equal to $n$. Let

$A = \begin{pmatrix} 1 & \alpha & \beta \\ 0 & 0 & \gamma \\ 0 & 0 & 1 \end{pmatrix}$

First, we find the eigenvalues of $A$. Computing the characteristic polynomial of $A$, we obtain $f(t)=-t(t-1)^2$. The eigenvalues of $A$ are then $\lambda=0$ and $\lambda=1$.

Next we find the corresponding eigenspaces.

For $\lambda=0$ we solve the system given by $(A-0I)\vec{x}=0$ and get that $x_1=-\alpha x_2$ and $x_3=0$. Thus,

$\left\{ \begin{pmatrix} -\alpha \\ 1 \\ 0 \end{pmatrix} \right\}$

is a basis for the eigenspace corresponding to $\lambda=0$.

For $\lambda=1$ we solve the system given by $(A-1I)\vec{x}=0$ and get that $x_1$ is free, $x_2=\gamma x_3$, and $x_3(\alpha\gamma+\beta)=0$.

If $\alpha\gamma+\beta\neq 0$, then $x_3=0$, which forces $x_2=0$, and

$\left\{ \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \right\}$

would be our basis for the eigenspace corresponding to $\lambda=1$. In this case the dimensions of the eigenspaces sum to $2<3$, so $A$ would not be diagonalizable.

If $\alpha\gamma+\beta=0$ then

$\left\{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},\begin{pmatrix} 0 \\ \gamma \\ 1 \end{pmatrix}\right\}$

would be our basis for the eigenspace corresponding to $\lambda=1$. In this case, $A$ would be diagonalizable.

In conclusion, $A$ is diagonalizable if and only if $\alpha\gamma+\beta=0$. $\Box$
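For what it’s worth, the conclusion can be spot-checked with SymPy; the helper `make_A` below is just for illustration:

```python
# Spot-check: A should be diagonalizable over R exactly when
# alpha*gamma + beta == 0.
from sympy import Matrix

def make_A(alpha, beta, gamma):
    return Matrix([[1, alpha, beta], [0, 0, gamma], [0, 0, 1]])

# alpha*gamma + beta == 0  ->  diagonalizable
assert make_A(2, -6, 3).is_diagonalizable()
# alpha*gamma + beta != 0  ->  not diagonalizable
assert not make_A(2, 1, 3).is_diagonalizable()
```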

5. Let $M_n(\mathbb{R})$ be the vector space of $n\times n$ matrices with entries in $\mathbb{R}$ and let $S$ and $Z$ denote the set of real $n\times n$ symmetric and skew-symmetric matrices, respectively.

(a) Show that the dimension of $S$ is $\frac{1}{2}n(n+1)$. A brief justification is sufficient.

Proof. Let

$A=\begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n}\\ a_{2,1} & a_{2,2} & \cdots & a_{2,n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{n,1} & a_{n,2} & \cdots & a_{n,n} \end{pmatrix}$

be an element of $S$. Then

$\begin{pmatrix} a_{1,1} & a_{2,1} & \cdots & a_{n,1}\\ a_{1,2} & a_{2,2} & \cdots & a_{n,2}\\ \vdots & \vdots & \ddots & \vdots\\ a_{1,n} & a_{2,n} & \cdots & a_{n,n} \end{pmatrix} = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n}\\ a_{2,1} & a_{2,2} & \cdots & a_{2,n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{n,1} & a_{n,2} & \cdots & a_{n,n} \end{pmatrix}$.

Thus,

$a_{1,2}=a_{2,1}, a_{1,3}=a_{3,1}, \ldots, a_{1,n}=a_{n,1}$

$a_{2,3}=a_{3,2}, a_{2,4}=a_{4,2}, \ldots, a_{2,n}=a_{n,2}$

…and so on. The free entries are exactly those $a_{i,j}$ with $i\leq j$, and counting them gives $n+(n-1)+\cdots+1=\frac{1}{2}n(n+1)$. Thus a basis for $S$ has $\frac{1}{2}n(n+1)$ elements. $\Box$
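The count of entries on or above the diagonal can be made concrete with a short Python loop:

```python
# Count the free entries a_{i,j} with i <= j of a symmetric n x n matrix;
# this should agree with the closed form n(n+1)/2.
def dim_symmetric(n):
    return sum(1 for i in range(n) for j in range(i, n))

for n in range(1, 8):
    assert dim_symmetric(n) == n * (n + 1) // 2
```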

(b) Let $T : M_n(\mathbb{R}) \to M_n(\mathbb{R})$ be the linear transformation defined by $T(A)=\frac{1}{2}(A+A^t)$ for all $A\in M_n(\mathbb{R})$. Prove that $\text{Ker}(T)=Z$ and $\text{Im}(T)=S$.

Proof. First we show that $\text{Ker}(T)=Z$. Let $A\in\text{Ker}(T)$. Then $T(A)=\frac{1}{2}(A+A^t)=0$ and so $A^t=-A$. Thus, $A\in Z$. Now, let $A\in Z$. Then $A^t=-A$ and so $T(A)=\frac{1}{2}(A+A^t)=\frac{1}{2}(A-A)=0$. Thus, $A\in\text{Ker}(T)$.

Next we show that $\text{Im}(T)=S$. Let $A\in\text{Im}(T)$. Then there exists $B\in M_n(\mathbb{R})$ such that $T(B)=\frac{1}{2}(B+B^t)=A$. Then $A^t=\left(\frac{1}{2}(B+B^t)\right)^t=\frac{1}{2}(B^t+B)=A$ and so $A\in S$. Now, let $A\in S$. Then $A=A^t$ and so $T(A)=\frac{1}{2}(A+A^t)=\frac{1}{2}(A+A)=A$. Thus, $A\in\text{Im}(T)$. $\Box$

(c) Compute the dimension of $Z$.

Solution. By the rank nullity theorem we have that

$\text{dim}(M_n(\mathbb{R}))=\text{rank}(T)+\text{nullity}(T)$.

In particular,

$n^2=\frac{1}{2}n(n+1)+\text{dim}(Z)$.

Thus, $\text{dim}(Z)=n^2-\frac{1}{2}n(n+1)$. $\Box$
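As an optional numerical check, one can realize $T$ as an $n^2\times n^2$ matrix acting on vectorized matrices and verify both dimensions with NumPy; the encoding below is one possible choice:

```python
# Realize T(A) = (A + A^T)/2 as a matrix in the standard basis E_{ij}
# of M_n(R), then check rank(T) = n(n+1)/2 and nullity(T) = n(n-1)/2.
import numpy as np

def T_matrix(n):
    """Matrix of T acting on vectorized n x n matrices."""
    M = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            E = np.zeros((n, n))
            E[i, j] = 1.0
            M[:, i * n + j] = ((E + E.T) / 2).flatten()
    return M

for n in range(1, 6):
    r = np.linalg.matrix_rank(T_matrix(n))
    assert r == n * (n + 1) // 2            # rank(T) = dim(S)
    assert n * n - r == n * (n - 1) // 2    # nullity(T) = dim(Z)
```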

Lockhart’s Lament

I thought I posted this a while back, but apparently I didn’t. This is perhaps the single best essay that I have ever read. The link above takes you to an MAA page with an intro by Keith Devlin; a link to the essay is near the bottom of the page.

I had my algebra qual yesterday. These are the problems along with my solutions. Hopefully they are correct.

Some preliminary comments. First, I know my solution to 1(a) is incorrect. Second, I didn’t know how to prove problem 3. I literally wrote down garbage so I am expecting zero points for that one.

1. Let $\mathbb{Q}$ denote the additive group of rational numbers.

a) Prove that every finitely generated subgroup of $\mathbb{Q}$ is cyclic.

Proof. Let $H$ be a finitely generated subgroup of $\mathbb{Q}$ and let $\{h_1,\ldots,h_m\}$ be the generators of $H$. Pick $x\in H$. We want to show that $x$ can be generated by a single $h$ in $H$. Then $x=n(h_1+\ldots+h_m)$ for some positive integer $n$. But, $h_1+\ldots+h_m=\tilde{h}\in H$. Thus, $x=n\tilde{h}$. $\Box$

b) Prove that $\mathbb{Q}$ is not finitely generated.

Proof. Seeking a contradiction, suppose that $\mathbb{Q}$ is finitely generated. Note that $\mathbb{Q}$ is a subgroup of itself. Thus, $\mathbb{Q}$ is a finitely generated subgroup of $\mathbb{Q}$ and by part (a), this means that $\mathbb{Q}$ must be cyclic. This of course is a contradiction because $\mathbb{Q}$ is not cyclic. $\Box$

2. List, up to isomorphism, all abelian groups of order $3528=2^3\cdot3^2\cdot7^2$.

Solution. The partitions of $3$ are $3$, $2+1$, and $1+1+1$. The abelian groups associated with $2^3$ are:

• $\mathbb{Z}_8$
• $\mathbb{Z}_4 \times \mathbb{Z}_2$
• $\mathbb{Z}_2 \times \mathbb{Z}_2 \times \mathbb{Z}_2$

The partitions of $2$ are $2$ and $1+1$. The abelian groups associated with $3^2$ are:

• $\mathbb{Z}_9$
• $\mathbb{Z}_3 \times \mathbb{Z}_3$

The abelian groups associated with $7^2$ are:

• $\mathbb{Z}_{49}$
• $\mathbb{Z}_7 \times \mathbb{Z}_7$

By the fundamental theorem of finitely generated abelian groups, all abelian groups of order $3528$ are isomorphic to one of the groups listed below and none of the groups is isomorphic to another.

1. $\mathbb{Z}_8 \times \mathbb{Z}_9 \times \mathbb{Z}_{49}$
2. $\mathbb{Z}_8 \times \mathbb{Z}_9 \times \mathbb{Z}_7 \times \mathbb{Z}_7$
3. $\mathbb{Z}_8 \times \mathbb{Z}_3 \times \mathbb{Z}_3 \times \mathbb{Z}_{49}$
4. $\mathbb{Z}_8 \times \mathbb{Z}_3 \times \mathbb{Z}_3 \times \mathbb{Z}_7 \times \mathbb{Z}_7$
5. $\mathbb{Z}_4 \times \mathbb{Z}_2 \times \mathbb{Z}_9 \times \mathbb{Z}_{49}$
6. $\mathbb{Z}_4 \times \mathbb{Z}_2 \times \mathbb{Z}_9 \times \mathbb{Z}_7 \times \mathbb{Z}_7$
7. $\mathbb{Z}_4 \times \mathbb{Z}_2 \times \mathbb{Z}_3 \times \mathbb{Z}_3 \times \mathbb{Z}_{49}$
8. $\mathbb{Z}_4 \times \mathbb{Z}_2 \times \mathbb{Z}_3 \times \mathbb{Z}_3 \times \mathbb{Z}_7 \times \mathbb{Z}_7$
9. $\mathbb{Z}_2 \times \mathbb{Z}_2 \times \mathbb{Z}_2 \times \mathbb{Z}_9 \times \mathbb{Z}_{49}$
10. $\mathbb{Z}_2 \times \mathbb{Z}_2 \times \mathbb{Z}_2 \times \mathbb{Z}_9 \times \mathbb{Z}_7 \times \mathbb{Z}_7$
11. $\mathbb{Z}_2 \times \mathbb{Z}_2 \times \mathbb{Z}_2 \times \mathbb{Z}_3 \times \mathbb{Z}_3 \times \mathbb{Z}_{49}$
12. $\mathbb{Z}_2 \times \mathbb{Z}_2 \times \mathbb{Z}_2 \times \mathbb{Z}_3 \times \mathbb{Z}_3 \times \mathbb{Z}_7 \times \mathbb{Z}_7$ $\Box$
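The count of twelve can be cross-checked: the number of abelian groups of order $p_1^{e_1}\cdots p_k^{e_k}$ is the product of the partition numbers $p(e_1)\cdots p(e_k)$, which SymPy can compute:

```python
# 3528 = 2^3 * 3^2 * 7^2, so the number of abelian groups of that order
# is p(3) * p(2) * p(2) = 3 * 2 * 2 = 12.
from sympy import npartitions

assert npartitions(3) * npartitions(2) * npartitions(2) == 12
```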

3. Let $V$ be a vector space over a field $F$ and let $T:V\to V$ be a linear operator such that $T^2=T$. Prove that $V=\text{ker}(T)\oplus\text{im}(T)$.

Proof. 😦

4. Let $A$, $B$ be $n\times n$ matrices with entries in a field $F$ and suppose that $AB=BA$. Prove that if $B$ has an eigenspace of dimension one, then $A$ and $B$ share a common eigenvector.

Proof. Let $\lambda$ be an eigenvalue of $B$ whose eigenspace $E_\lambda$ has dimension one, and let $v$ be an eigenvector spanning $E_\lambda$. Then $BAv=ABv=A\lambda v=\lambda Av$, which means that $Av\in E_\lambda$. Since $E_\lambda$ is spanned by $v$, we have $Av=cv$ for some scalar $c$. Thus $v$ is an eigenvector of $A$ with corresponding eigenvalue $c$, and so $A$ and $B$ share the common eigenvector $v$. $\Box$
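A small numerical illustration of the argument (with a matrix $A$ chosen, for convenience, as a polynomial in $B$ so that the two commute):

```python
# B has one-dimensional eigenspaces; A is a polynomial in B, so AB = BA.
# An eigenvector of B should be carried by A into the span of itself.
import numpy as np

B = np.diag([1.0, 2.0, 3.0])       # each eigenspace of B is one-dimensional
A = B @ B + 4 * B + np.eye(3)      # a polynomial in B, hence AB = BA

assert np.allclose(A @ B, B @ A)
v = np.array([0.0, 1.0, 0.0])      # eigenvector of B for eigenvalue 2
Av = A @ v
# Av lies in span{v}: v is a common eigenvector (A-eigenvalue 2^2+4*2+1 = 13)
assert np.allclose(Av, 13 * v)
```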

5. Let $D$ be a principal ideal domain. Prove that every proper nontrivial prime ideal is maximal.

Proof. Let $(p)$ be a proper nontrivial prime ideal of $D$. Let $(a)$ be an ideal of $D$ such that $(p)\subseteq(a)$. Then $p=ab$ for some $b$ in $D$. Thus, $ab\in(p)$. Since $(p)$ is prime, $a\in(p)$ or $b\in(p)$.

Case 1: Suppose that $a\in(p)$. Then $(a)\subseteq(p)$ so $(p)=(a)$.

Case 2: Suppose that $b\in(p)$. Then $b=pu$ for some $u$ in $D$. Thus, $p=ab=apu$, and cancelling $p$ (valid since $D$ is an integral domain and $p\neq 0$) gives $1=au$, so $a$ is a unit. Thus, $(a)=D$.

We have shown that $(p)=(a)$ or $(a)=D$ which means that $(p)$ is a maximal ideal of $D$. $\Box$

Each problem is worth 5 points; 15 is a passing score. Best case, I think my score will be 2+5+0+5+5, which is 17 and a pass. Realistically, I think my score could be 1+5+0+4+5, which is 15 and a pass. But it could be 1+4+0+4+4, which is a 13 and not a pass. I feel very confident about problems 2, 4, and 5; those should be five points each. But the qual committee can be very particular about grading. I also feel very confident about 1(b); I just don’t know how many points that one is worth. All I can do now is wait…

Definition. A vector space $V$ is finite dimensional if it has a basis consisting of a finite number of vectors. The unique number of vectors in each basis for $V$ is called the dimension of $V$ and is denoted by $\text{dim}(V)$.

Definition. Let $V$ and $W$ be vector spaces and let $T:V\to W$ be linear. The null space (or kernel) $N(T)$ of $T$ is the set of all vectors $x\in V$ such that $T(x)=0$; that is, $N(T)=\{x\in V:T(x)=0\}$. The range (or image) $R(T)$ of $T$ is the subset of $W$ consisting of all images, under $T$, of vectors in $V$; that is, $R(T)=\{T(x):x\in V\}$.

Definition. Let $V$ and $W$ be vector spaces and let $T:V\to W$ be linear. The nullity of $T$ is the dimension of $N(T)$ denoted by $\text{nullity}(T)$. The rank of $T$ is the dimension of $R(T)$ denoted by $\text{rank}(T)$.

Rank Nullity Theorem. Let $V$ and $W$ be vector spaces and let $T:V\to W$ be linear. If $V$ is finite dimensional, then $\text{dim}(V)=\text{rank}(T)+\text{nullity}(T)$.

Proof. Suppose that $\{u_1,\ldots,u_m\}$ forms a basis for $N(T)$. We can extend this to form a basis for $V$, $\{u_1,\ldots,u_m,w_1,\ldots,w_n\}$. Since $\text{nullity}(T)=m$ and $\text{dim}(V)=m+n$, it suffices to show that $\text{rank}(T)=n$. I claim that $S=\{Tw_1,\ldots,Tw_n\}$ forms a basis for $R(T)$.

First we show that $S$ spans $R(T)$. Let $v\in V$. Then there exist unique scalars $a_1,\ldots,a_m$ and $b_1,\ldots,b_n$ such that

$v=a_1u_1+\ldots+a_mu_m+b_1w_1+\ldots+b_nw_n$.

Thus,

$Tv=a_1Tu_1+\ldots+a_mTu_m+b_1Tw_1+\ldots+b_nTw_n$.

Since each $u_i$ is in $N(T)$, we have that $Tu_i=0$ and so $Tv=b_1Tw_1+\ldots+b_nTw_n$.  This shows that $S$ spans $R(T)$.

Next we show that $S$ is linearly independent. Suppose that

$c_1Tw_1+\ldots+c_nTw_n=0$

for scalars $c_i$. Since $T$ is linear, we have that

$T(c_1w_1+\ldots+c_nw_n)=0$.

Thus, $c_1w_1+\ldots+c_nw_n$ is in $N(T)$. Since $u_i$ span $N(T)$, there exist scalars $d_i$ such that

$c_1w_1+\ldots+c_nw_n=d_1u_1+\ldots+d_mu_m$.

Which implies that

$c_1w_1+\ldots+c_nw_n-(d_1u_1+\ldots+d_mu_m)=0$.

But, $\{u_1,\ldots,u_m,w_1,\ldots,w_n\}$ is a basis for $V$ which means that all the $c_i$ and $d_i$ must be zero. Thus, $S$ is linearly independent and is indeed a basis for $R(T)$. This proves that $\text{rank}(T)=n$ as desired. $\Box$
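The theorem is easy to spot-check with SymPy by computing the rank and the null space of a sample matrix independently:

```python
# Check rank + nullity = dim(V) for a sample linear map T : R^4 -> R^3.
from sympy import Matrix

M = Matrix([[1, 2, 3, 4], [2, 4, 6, 8], [0, 1, 1, 0]])
rank = M.rank()
nullity = len(M.nullspace())   # dim N(T), computed independently of the rank
assert rank + nullity == M.cols   # dim(V) = 4
```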

While studying for the algebra qual, I stumbled upon the following problem:

Let $G$ be a group of order $2p$ where $p$ is an odd prime. Prove that $G$ contains a nontrivial normal subgroup.

The problem is simple enough; however, it uses many results from group theory. We’ll investigate the problem in detail below.

Cosets and Lagrange’s Theorem

Definition. Let $H$ be a subgroup of a group $G$. The subset $aH=\{ah:h\in H\}$ of $G$ is the left coset of $H$ containing $a$. The subset $Ha=\{ha:h\in H\}$ is the right coset of $H$ containing $a$.

Cosets are important because they allow us to define normal subgroups, which in turn allow us to construct factor groups. We will need two important results concerning cosets.

If $H\leq G$, then the left cosets of $H$ form a partition of $G$. Indeed, let the relation $\sim_L$ be defined on $G$ by $a\sim_L b$ if and only if $a^{-1}b\in H$. It is easily verified that $\sim_L$ is an equivalence relation on $G$ with equivalence classes $aH$. Moreover, $aH=H$ if and only if $a\in H$. This result follows from a simple set-containment argument along with the definition of left coset. It should be noted that both of these results have analogues for right cosets.

Lagrange’s Theorem. If $G$ is a finite group and $H$ is a subgroup of $G$, then the order of $H$ divides the order of $G$.

Proof. Let $|G|=m$ and $|H|=n$. I claim that every left coset of $H$ has order $n$. Define the function $\varphi:H\to gH$ by $\varphi(h)=gh$. Then $\varphi$ is surjective by the definition of $gH$ as $\{gh:h\in H\}$. Suppose that $\varphi(h_1)=\varphi(h_2)$ for $h_1$ and $h_2$ in $H$. Then $gh_1=gh_2$ and by the cancellation law of a group, $h_1=h_2$. Thus, $\varphi$ is injective. Since $\varphi$ is a bijection, we conclude that $H$ and $gH$ have the same order and the claim is established. Since the left cosets of $H$ form a partition of $G$, we have that

$G = g_1H \cup g_2H \cup \cdots \cup g_rH$.

Thus,

$|G| = |g_1H| + |g_2H| + \ldots + |g_rH|$.

Since the order of each left coset of $H$ is equal to $n$, we have that

$|G| = m = n + n + \ldots + n$

where $n$ is summed up $r$ times. That is, $m=rn$ which shows that $n$ is a divisor of $m$. $\Box$
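The proof idea can be illustrated in $\mathbb{Z}_{12}$ with the subgroup $H=\{0,4,8\}$; this sketch just enumerates the cosets:

```python
# Cosets of H = {0, 4, 8} in Z_12: each has |H| elements and together they
# partition G, so |H| divides |G|.
n = 12
H = {0, 4, 8}
cosets = {frozenset((g + h) % n for h in H) for g in range(n)}

assert all(len(c) == len(H) for c in cosets)   # every coset has |H| elements
assert set().union(*cosets) == set(range(n))   # the cosets cover G
assert len(cosets) * len(H) == n               # |G| = r * |H|
```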

In our proof, to establish that every left coset of $H$ has the same order as $H$ we showed that the function $\varphi(h)=gh$ was a bijection. It turns out that every right coset of $H$ also has the same order as $H$. To establish this, check that $\varphi:H\to Hg$ defined by $\varphi(h)=hg$ is a bijection.

Lagrange’s theorem is perhaps one of the most useful results in group theory since it allows us to actually compute orders of subgroups. The converse of Lagrange’s theorem is not true. The classic counterexample is the alternating group $A_4$, the group of even permutations on $4$ elements. The order of $A_4$ is $12$ and $6$ is a divisor of $12$, but $A_4$ has no subgroup of order $6$. There are, however, some partial converses to Lagrange’s theorem. A partial converse which holds for arbitrary finite groups is the following result.

Cauchy’s Theorem. If $G$ is a finite group and $p$ is a prime dividing $|G|$, then $G$ has an element of order $p$.

Cauchy’s theorem will be useful for us in proving our original problem.

Normal Subgroups

Definition. A subgroup $N$ of a group $G$ is normal if $gN=Ng$ for all $g\in G$. If $N$ is a normal subgroup of $G$ then we shall write $N\unlhd G$.

The next theorem is of interest to us since it will be essential for proving our original problem.

Theorem 1. Let $G$ be a group of order $2n$. If $H$ is a subgroup of $G$ and $|H|=n$, then $H\unlhd G$.

Proof. Let $a\in G\setminus H$. Then $aH\neq H$, $Ha\neq H$, and $|aH|=|Ha|=n$. Since $|G|=2n$, we have that $G=H\cup aH$ and $G=H\cup Ha$. Because the cosets form a partition of $G$, both unions are disjoint, so $aH=G\setminus H=Ha$. $\Box$

The Original Problem

Let $G$ be a group of order $2p$ where $p$ is an odd prime. Prove that $G$ contains a nontrivial normal subgroup.

Proof. By Cauchy’s theorem, there exists an element $a\in G$ such that $|a|=p$. Then the cyclic subgroup $\langle a \rangle$ is nontrivial and has order $p$. By theorem 1, $\langle a \rangle$ is a normal subgroup of $G$. $\Box$
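For the smallest case, $G=S_3$ with $p=3$, the conclusion can be verified by brute force; permutations are represented as tuples below purely for illustration:

```python
# G = S_3 (order 6 = 2*3), permutations as tuples, composition (p*q)(i) = p[q[i]].
from itertools import permutations

def comp(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

G = list(permutations(range(3)))
a = (1, 2, 0)                               # a 3-cycle, so |a| = 3
N = {a, comp(a, a), comp(a, comp(a, a))}    # the cyclic subgroup <a>
assert len(N) == 3
# N is normal: gNg^{-1} = N for every g in G
assert all({comp(comp(g, x), inv(g)) for x in N} == N for g in G)
```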

The proof is quite elegant, but as you have seen, we needed to recall some important results in order to achieve such a simple proof.

Exciting news, I’ll be teaching calculus next quarter!

…kind of.

There is a bit of a catch. I’ll be teaching calculus for business and economics, commonly referred to as business calc or (as the TAs like to call it) busy calc. The catch is that there is no trig (how can you do anything fun in calculus without trig?) and there will be a lot of applications to “business problems”. Regardless, it is still calculus and I am excited!

Keith Devlin is a professor at Stanford but probably better known as the NPR “math guy”. I really like his latest piece on Weekend Edition.

The Way You Learned Math Is So Old School : NPR

Remember going through your multiplication tables in elementary school? Or crunching through long division? Yeah, it sucked, and that’s because it’s largely useless in today’s society, where everyone uses a calculator or a computer to carry out arithmetic calculations. Computers do arithmetic for us, Devlin says, but making computers do the things we want them to do requires algebraic thinking. Elementary schools are starting to notice this, and it’s changing the way arithmetic is taught. The emphasis in teaching mathematics today is on getting people to be sophisticated, algebraic thinkers. This doesn’t mean that we should stop teaching arithmetic in school; arithmetic is the foundation for strong algebraic thinking. It is not, however, the end goal.

The next time you slice a bagel you might want to try this! You can slice a bagel in a single cut so that the result is two equal halves linked together. Don’t believe it? George W. Hart, sculptor and mathematician, has instructions you can follow here.

The two halves turn out to be Möbius strips. For a better visual, check out the video.

Theorem. Let $R$ be a commutative ring with unity. The ideal $A$ of $R$ is a maximal ideal if and only if the quotient ring $R/A$ is a field.

Proof. Suppose that $A$ is a maximal ideal of $R$. Let $b\in R\setminus A$. It suffices to show that $b+A$ has a multiplicative inverse. Consider the set $B=\{br+a : r\in R, a\in A\}$. I claim that $B$ is an ideal of $R$ that properly contains $A$. Indeed, we check the following:

Nonempty. Since $b=b\cdot 1+0\in B$ (taking $r=1$ and $a=0$), we have that $B$ is nonempty.

Subtraction. Let $br_1+a_1, br_2+a_2\in B$. Then we have that

$(br_1+a_1)-(br_2+a_2) = b(r_1-r_2)+(a_1-a_2)$.

Since $R$ is a ring and $A$ is an ideal, $r_1-r_2\in R$ and $a_1-a_2\in A$. Thus, $(br_1+a_1)-(br_2+a_2)\in B$.

Multiplication. Let $x\in R$ and $br+a\in B$. Since $R$ is commutative, it suffices to show that $x(br+a)\in B$. We have that

$x(br+a)=bxr+xa$.

Since $R$ is a ring and $A$ is an ideal, $xr\in R$ and $xa\in A$. Thus, $x(br+a)\in B.$

Properly contains $A$. By definition of $B$, $A\subseteq B$. Moreover, notice that $b\in B$ and by assumption, $b\notin A$. Thus, $A$ is a proper subset of $B.$

Therefore, $B$ is as claimed. Now, since $A$ is maximal, we must have that $B=R$. Thus $1\in B$, say, $1=bc+a'$. Then,

$1+A=(bc+a')+A=bc+A=(b+A)(c+A).$

We conclude that $c+A$ is the multiplicative inverse of $b+A$; hence, $R/A$ is a field.

Conversely, suppose that $R/A$ is a field. Let $B$ be an ideal of $R$ that properly contains $A$ (i.e. $A\subset B\subseteq R$) and let $b\in B\setminus A$. Then $b+A$ is a nonzero element of $R/A$ and so there exists an element $c+A\in R/A$ such that $(b+A)(c+A)=1+A$, the multiplicative identity of $R/A$. Since $b\in B$, $c\in R$, and $B$ is an ideal, we have that $bc\in B$. Moreover, since

$1+A=(b+A)(c+A)=bc+A$,

we have that $1-bc\in A$. Thus, $1=(1-bc)+bc\in B$ and since $B$ contains $1$ it must be that $B=R$. We conclude that $A$ is a maximal ideal of $R$. $\Box$
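In the familiar case $R=\mathbb{Z}$ and $A=(n)$, the theorem says $(n)$ is maximal exactly when $\mathbb{Z}/(n)$ is a field, which this sketch tests for a few values of $n>1$:

```python
# Z/(n) is a field iff every nonzero class is invertible mod n,
# i.e. iff gcd(a, n) = 1 for all 0 < a < n (n > 1 assumed).
from math import gcd

def is_field(n):
    """Check whether Z/(n) is a field, for n > 1."""
    return all(gcd(a, n) == 1 for a in range(1, n))

assert is_field(7) and is_field(13)         # (7), (13) are maximal ideals of Z
assert not is_field(6) and not is_field(9)  # (6), (9) are not maximal
```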

Theorem. A subgroup of a cyclic group is cyclic.

Proof. Let $G$ be a cyclic group generated by $a$ and let $H\leq G$. If $H=\{e\}$ then $H=\langle e \rangle$ is cyclic. If $H\neq\{e\}$, then $a^n\in H$ for some $n\in\mathbb{Z}^+$ (if $a^k\in H$ with $k<0$, then $a^{-k}\in H$ as well since $H$ is closed under inverses). Let $m$ be the smallest integer in $\mathbb{Z}^+$ such that $a^m\in H$.

I claim that $c=a^m$ generates $H$; that is, $H=\langle a^m \rangle = \langle c \rangle$. We must show that every $b\in H$ is a power of $c$. Let $b\in H$. Then since $H\leq G$, $b=a^n$ for some integer $n$. By the division algorithm for $\mathbb{Z}$, there exist $q, r \in \mathbb{Z}$ such that $n=mq+r$ where $0\leq r < m$. Then $a^n=a^{mq+r}=a^{mq}a^r$ so $a^r=a^na^{-mq}$. We must have that $a^r\in H$ because $H$ is a group and $a^n, a^{-mq} \in H$. Since $m$ is the smallest positive integer such that $a^m\in H$ and $0\leq r < m$, it must be the case that $r=0$. Thus, $n=mq$, $b=a^n=a^{mq}=c^q$, and so $b$ is a power of $c$. $\Box$
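As a concrete illustration in $\mathbb{Z}_{12}$ (cyclic, written additively), the subgroup generated by $\{4,6\}$ should collapse to a single generator; the choice of generators here is arbitrary:

```python
# Close {4, 6} under addition mod 12 to get the subgroup H, then check that
# H is cyclic, generated by the single element gcd(4, 6, 12) = 2.
from math import gcd

n = 12
gens = [4, 6]
H = {0}
while True:
    new = {(h + g) % n for h in H for g in gens} | H
    if new == H:
        break
    H = new
d = gcd(gcd(*gens), n)                        # candidate single generator
assert H == {(d * k) % n for k in range(n)}   # H = <2>, so H is cyclic
```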