As mentioned in the previous post, here is the algebra qual with my solutions. To be honest, this is one of the easiest quals I’ve seen from them. The scores were quite high; I scored 24 out of 25, and there were a couple of perfect scores. I think my only mistake came on problem 5(a), where I gave a pretty hand-wavy argument. Oh well…

1. Let G be a group.

(a) Let \phi : G \to G be defined by \phi(g)=g^2 for all g\in G. Prove that \phi is a homomorphism if and only if G is abelian.

Proof. Suppose that \phi is a homomorphism. Let a,b \in G. Then \phi(ab)=\phi(a)\phi(b)=a^2b^2=aabb. Also, \phi(ab)=(ab)^2=abab. Thus, aabb=abab. Multiplying on the left by a^{-1} and on the right by b^{-1}, we have that ab=ba.

Conversely, suppose that G is abelian. Let a,b \in G. Then \phi(ab)=(ab)^2=a^2b^2=\phi(a)\phi(b), where the second equality holds because a and b commute. \Box
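Just as a sanity check (not part of the exam), here is a quick SymPy verification of part (a) on two small groups of my choosing, assuming SymPy is available: the squaring map respects products exactly when the group is abelian.

```python
from sympy.combinatorics.named_groups import CyclicGroup, SymmetricGroup

# For each test group, check whether g -> g**2 respects products,
# and compare with abelianness. C_6 is abelian; S_3 is not.
for G in (CyclicGroup(6), SymmetricGroup(3)):
    elems = list(G.elements)
    is_hom = all((a*b)**2 == a**2 * b**2 for a in elems for b in elems)
    print(G.is_abelian, is_hom)  # the two booleans agree in both cases
```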

(b) If G is abelian and finite, show that \phi is an automorphism if and only if G has odd order.

Proof. Suppose that \phi is an automorphism. Seeking a contradiction, assume that G does not have odd order. Then the prime 2 divides |G|, so by Cauchy’s theorem G has an element a of order 2. Then \phi(a)=a^2=e, so a is a nontrivial element in the kernel of \phi. This, of course, is a contradiction because \phi is injective, and so e must be the only element in the kernel of \phi.

Conversely, suppose that G has odd order. Since G is abelian, by part (a) we know that \phi is a homomorphism. Moreover, because \phi maps the finite set G to itself, \phi being injective will imply \phi is surjective. Thus, it suffices to show that \phi is injective. We do this by showing that the kernel of \phi is trivial. Suppose that a is in the kernel of \phi. Then \phi(a)=a^2=e. Thus |a| must be 1 or 2. But the order of a must divide the order of G, and G has odd order, so |a|=1 and a=e. Thus, the kernel of \phi is trivial. \Box
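A similar spot check for part (b), again assuming SymPy and with cyclic groups as my test cases: squaring is a bijection exactly when the order is odd.

```python
from sympy.combinatorics.named_groups import CyclicGroup

# On an abelian group, g -> g**2 is an automorphism iff it is injective,
# iff the set of squares has full size. This should hold exactly for odd n.
for n in (3, 4, 5, 6, 7):
    G = CyclicGroup(n)
    squares = {g**2 for g in G.elements}
    print(n, len(squares) == G.order())  # True precisely when n is odd
```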

2. Let G be a group and let S=\{xyx^{-1}y^{-1}|x,y\in G\}. Prove: If H is a subgroup of G and S\subseteq H, then H is a normal subgroup of G.

Proof. Let g\in G and h\in H. Since H is a subgroup, it suffices to show that ghg^{-1}\in H. We have that ghg^{-1}=ghg^{-1}h^{-1}h. But ghg^{-1}h^{-1}\in S\subseteq H, so ghg^{-1}h^{-1}=\hat{h} for some \hat{h}\in H. Thus, ghg^{-1}=ghg^{-1}h^{-1}h=\hat{h}h\in H because H is closed under multiplication. \Box
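As an illustration (my example, not part of the qual), SymPy can verify an instance of this in S_4: the derived subgroup A_4 contains every commutator, and it indeed tests as normal.

```python
from sympy.combinatorics.named_groups import SymmetricGroup

# In S_4 the derived subgroup is A_4; it contains every commutator
# x*y*x^{-1}*y^{-1}, so the proposition predicts it is normal.
G = SymmetricGroup(4)
H = G.derived_subgroup()
elems = list(G.elements)
assert all(H.contains(x*y*(~x)*(~y)) for x in elems for y in elems)
print(H.is_normal(G))  # True
```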

3. Prove that in an integral domain D every prime element is irreducible.

Proof. Let p be a prime element of D; by definition p is nonzero and not a unit. Then the ideal generated by p, (p), is a prime ideal. Suppose that p=xy. Then xy\in (p), and so x\in (p) or y\in (p). Thus, x=pu=xyu or y=pv=xyv for some u,v\in D. Since p\neq 0, we have x\neq 0 and y\neq 0, so by cancellation in an integral domain we see that 1=yu or 1=xv. In either case y is a unit or x is a unit, which means that p is irreducible. \Box
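For a concrete instance of the argument (my example, not part of the qual), take D=\mathbb{Z} and p=7. If 7=xy, then x\in (7) or y\in (7); say x=7u. Then

7=xy=7uy \implies 1=uy,

so y is a unit, i.e., y=\pm 1.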

4. Find necessary and sufficient conditions on \alpha, \beta, \gamma \in \mathbb{R} such that the matrix

\begin{pmatrix} 1 & \alpha & \beta \\ 0 & 0 & \gamma \\ 0 & 0 & 1 \end{pmatrix}

is diagonalizable over \mathbb{R}.

Solution. Recall that an n\times n matrix is diagonalizable if and only if the sum of the dimensions of its eigenspaces equals n. Let

A = \begin{pmatrix} 1 & \alpha & \beta \\ 0 & 0 & \gamma \\ 0 & 0 & 1 \end{pmatrix}.

First, we find the eigenvalues of A. Computing the characteristic polynomial of A, we obtain f(t)=-t(t-1)^2. The eigenvalues of A are then \lambda=0 and \lambda=1, the latter with algebraic multiplicity 2.
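To spell out the computation: A-tI is upper triangular, so its determinant is the product of the diagonal entries,

f(t)=\det(A-tI)=\det\begin{pmatrix} 1-t & \alpha & \beta \\ 0 & -t & \gamma \\ 0 & 0 & 1-t \end{pmatrix}=(1-t)(-t)(1-t)=-t(t-1)^2.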

Next we find the corresponding eigenspaces.

For \lambda=0 we solve the system given by (A-0I)\vec{x}=0 and get that x_1=-\alpha x_2 and x_3=0. Thus,

\left\{ \begin{pmatrix} -\alpha \\ 1 \\ 0 \end{pmatrix} \right\}

is a basis for the eigenspace corresponding to \lambda=0.
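Explicitly, the system here reads

(A-0I)\vec{x}=0 \iff \begin{cases} x_1+\alpha x_2+\beta x_3=0 \\ \gamma x_3=0 \\ x_3=0, \end{cases}

and the third equation forces x_3=0 (making the second redundant), leaving x_1=-\alpha x_2 with x_2 free.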

For \lambda=1 we solve the system given by (A-1I)\vec{x}=0 and get that x_1 is free, x_2=\gamma x_3, and x_3(\alpha\gamma+\beta)=0.
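Explicitly, the nonzero rows of A-I give

(A-I)\vec{x}=0 \iff \begin{cases} \alpha x_2+\beta x_3=0 \\ -x_2+\gamma x_3=0, \end{cases}

and substituting x_2=\gamma x_3 from the second equation into the first yields (\alpha\gamma+\beta)x_3=0.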

If \alpha\gamma+\beta\neq 0, then x_3=0 is forced, hence x_2=0 as well, and

\left\{ \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \right\}

is a basis for the eigenspace corresponding to \lambda=1. In this case the eigenspace dimensions sum to 1+1=2<3, so A is not diagonalizable.

If \alpha\gamma+\beta=0, then x_1 and x_3 are both free and

\left\{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},\begin{pmatrix} 0 \\ \gamma \\ 1 \end{pmatrix}\right\}

is a basis for the eigenspace corresponding to \lambda=1. In this case the eigenspace dimensions sum to 1+2=3, so A is diagonalizable.

In conclusion, A is diagonalizable if and only if \alpha\gamma+\beta=0. \Box
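A quick numerical spot check of the criterion, assuming SymPy; the triples (\alpha,\beta,\gamma) below are arbitrary choices of mine.

```python
from sympy import Matrix

# Spot-check: A is diagonalizable exactly when alpha*gamma + beta == 0.
# Each triple below is (alpha, beta, gamma).
for a, b, g in [(1, -2, 2), (1, 1, 1), (0, 0, 5)]:
    A = Matrix([[1, a, b], [0, 0, g], [0, 0, 1]])
    print(a*g + b == 0, A.is_diagonalizable())  # the two flags agree
```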

5. Let M_n(\mathbb{R}) be the vector space of n\times n matrices with entries in \mathbb{R} and let S and Z denote the set of real n\times n symmetric and skew-symmetric matrices, respectively.

(a) Show that the dimension of S is \frac{1}{2}n(n+1). A brief justification is sufficient.

Proof. Let

A=\begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n}\\ a_{2,1} & a_{2,2} & \cdots & a_{2,n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{n,1} & a_{n,2} & \cdots & a_{n,n} \end{pmatrix}

be an element of S. Then A^t=A, that is,

\begin{pmatrix} a_{1,1} & a_{2,1} & \cdots & a_{n,1}\\ a_{1,2} & a_{2,2} & \cdots & a_{n,2}\\ \vdots & \vdots & \ddots & \vdots\\ a_{1,n} & a_{2,n} & \cdots & a_{n,n} \end{pmatrix} = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n}\\ a_{2,1} & a_{2,2} & \cdots & a_{2,n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{n,1} & a_{n,2} & \cdots & a_{n,n} \end{pmatrix}.

Thus,

a_{1,2}=a_{2,1}, a_{1,3}=a_{3,1}, \ldots, a_{1,n}=a_{n,1}

a_{2,3}=a_{3,2}, a_{2,4}=a_{4,2}, \ldots, a_{2,n}=a_{n,2}

…and so on. Thus a symmetric matrix is determined by its \frac{1}{2}n(n+1) entries on or above the main diagonal, so \text{dim}(S)=\frac{1}{2}n(n+1). \Box
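To make the count explicit (this is the cleanup my hand-wavy exam argument needed): writing E_{i,j} for the matrix with a 1 in position (i,j) and 0 elsewhere, the matrices E_{i,i} for 1\le i\le n together with E_{i,j}+E_{j,i} for i<j form a basis of S, and there are

n+\binom{n}{2}=n+\frac{n(n-1)}{2}=\frac{1}{2}n(n+1)

of them.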

(b) Let T : M_n(\mathbb{R}) \to M_n(\mathbb{R}) be the linear transformation defined by T(A)=\frac{1}{2}(A+A^t) for all A\in M_n(\mathbb{R}). Prove that \text{Ker}(T)=Z and \text{Im}(T)=S.

Proof. First we show that \text{Ker}(T)=Z. Let A\in\text{Ker}(T). Then T(A)=\frac{1}{2}(A+A^t)=0 and so A^t=-A. Thus, A\in Z. Now, let A\in Z. Then A^t=-A and so T(A)=\frac{1}{2}(A+A^t)=\frac{1}{2}(A-A)=0. Thus, A\in\text{Ker}(T).

Next we show that \text{Im}(T)=S. Let A\in\text{Im}(T). Then there exists B\in M_n(\mathbb{R}) such that T(B)=\frac{1}{2}(B+B^t)=A. Since (B+B^t)^t=B^t+B, this implies that A=A^t, and so A\in S. Now, let A\in S. Then A=A^t, and so T(A)=\frac{1}{2}(A+A^t)=\frac{1}{2}(A+A)=A. Thus, A\in\text{Im}(T). \Box
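A quick SymPy check of both containments for n=3; the random matrix B is just a convenient test case.

```python
from sympy import randMatrix, zeros

# T(A) = (A + A^t)/2. Check: T(B) is symmetric for an arbitrary B,
# and T kills the skew-symmetric matrix B - B^t.
def T(A):
    return (A + A.T) / 2

B = randMatrix(3, 3)
print(T(B) == T(B).T)             # True: Im(T) lands in S
print(T(B - B.T) == zeros(3, 3))  # True: Z lies in Ker(T)
```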

(c) Compute the dimension of Z.

Solution. By the rank-nullity theorem, we have that

\text{dim}(M_n(\mathbb{R}))=\text{rank}(T)+\text{nullity}(T).

In particular, since \text{rank}(T)=\text{dim}(S)=\frac{1}{2}n(n+1) and \text{nullity}(T)=\text{dim}(Z) by parts (a) and (b),

n^2=\frac{1}{2}n(n+1)+\text{dim}(Z).

Thus, \text{dim}(Z)=n^2-\frac{1}{2}n(n+1)=\frac{1}{2}n(n-1). \Box
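Finally, a SymPy check of the dimension count for n=3: build the 9\times 9 matrix of T in the standard basis \{E_{i,j}\} and read off its rank and nullity.

```python
from sympy import Matrix, zeros

# Matrix of T(A) = (A + A^t)/2 on M_3(R), with matrices flattened row-wise.
n = 3
images = []
for i in range(n):
    for j in range(n):
        E = zeros(n, n)
        E[i, j] = 1
        TE = (E + E.T) / 2
        images.append([TE[r, c] for r in range(n) for c in range(n)])
M = Matrix(images).T  # columns are the flattened images T(E_ij)
print(M.rank(), n*n - M.rank())  # 6 and 3, i.e., n(n+1)/2 and n(n-1)/2
```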