Lecture 05 - Signal Space
1 Before We Begin
The goal of the following lecture is to develop a system wherein we can think of signals as vectors. Conceptualizing signals in this way will make future topics much easier to reason about at the cost of some fairly abstract math required to fully define the system. Do not get discouraged if some of the following lecture does not make sense at first - it will become more intuitive as we start to use vectors throughout the course.
2 Finite Collections of Waveforms
Many situations in digital communications can be viewed as either selecting (transmitting) or decoding (receiving) one of a set of waveforms. We denote this set of waveforms
\[ \{ s_0 (t), \; s_1 (t), \; \ldots,\; s_{M-1} (t) \} \]
These \(M\) waveforms correspond to messages of \(\lceil \log_2 M \rceil\) bits.
For \(M=4\), we can map messages of 2 bits to a single waveform. For instance,
| \(m\) | Bits |
|---|---|
| 0 | 00 |
| 1 | 01 |
| 2 | 10 |
| 3 | 11 |
In practice, each of the \(M\) waveforms would correspond to a different signal. Here we do not give an example of what those waveforms could be; we show only how 4 waveforms could be mapped to bit sequences.
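To make the mapping concrete, here is a minimal Python sketch (the bit string and grouping are illustrative choices, not part of the notes) that converts a stream of bits into waveform indices for \(M=4\):

```python
# Map 2-bit messages to waveform indices for M = 4 (hypothetical labeling).
M = 4
bits_per_symbol = 2  # log2(M)

def bits_to_indices(bits):
    """Group a bit string into 2-bit messages and return the waveform index m for each."""
    assert len(bits) % bits_per_symbol == 0
    return [int(bits[i:i + bits_per_symbol], 2)
            for i in range(0, len(bits), bits_per_symbol)]

print(bits_to_indices("00011011"))  # -> [0, 1, 2, 3]
```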
In the following lecture we will develop a mathematical framework for thinking about the structure of such collections of waveforms, and a framework for generating and processing them.
We term this mathematical framework signal space.
3 Inner Product Spaces in the Abstract
Signal space is the term communications engineers use to refer to what mathematicians call an inner product space. Signal space is the application of inner product spaces to communications engineering. To understand signal space, we must begin by studying the abstract inner product space.
An inner product space starts with a vector space. A vector space \(\mathbb{V}\) is a set of vectors \(\vec{v} \in \mathbb{V}\) which is closed under both vector addition and scalar multiplication, i.e.,
- if \(\vec{u}, \vec{v} \in \mathbb{V}\), then \(\vec{u} + \vec{v} \in \mathbb{V}\)
- if \(\vec{v} \in \mathbb{V}\) and \(\alpha \in \mathbb{R}\) (or \(\mathbb{C}\)), then \(\alpha \cdot \vec{v} \in \mathbb{V}\)
Perhaps the most recognizable vector space is \(\mathbb{R}^2\). We know this is a vector space because it can be shown that \(\mathbb{R}^2\) is closed under vector addition and scalar multiplication. Although we will not rigorously prove these properties, we can show an example to build intuition.
Consider the vectors \(a = \langle 1 , 1 \rangle\) and \(b = \langle 2 , 2 \rangle\). Adding the two vectors yields \(\langle 3 , 3 \rangle\), which is clearly also a member of \(\mathbb{R}^2\). Because the sum of two vectors from \(\mathbb{R}^2\) is also in \(\mathbb{R}^2\), we can say that \(\mathbb{R}^2\) is closed under vector addition.
Consider again the vector \(a\). Let \(c = 5\) (5 is a scalar) and note that \(c \in \mathbb{R}\). Notice that \(c \cdot a = \langle 5 , 5 \rangle\), which is clearly a member of \(\mathbb{R}^2\). Thus, we can say that \(\mathbb{R}^2\) is closed under scalar multiplication, so \(\mathbb{R}^2\) must be a vector space!
An inner product space is a vector space with another operation defined on it: the inner product. The inner product operation is denoted by \(\langle \vec{u}, \vec{v} \rangle\) for \(\vec{u}, \vec{v} \in \mathbb{V}\) and must have all of the following properties:
- Linearity \(\langle \alpha \vec{u} + \beta \vec{v}, \vec{w} \rangle = \alpha \langle \vec{u}, \vec{w} \rangle + \beta \langle \vec{v}, \vec{w} \rangle\)
- Symmetry \(\langle \vec{u}, \vec{v} \rangle = \langle \vec{v}, \vec{u} \rangle^*\)
- Nullity \(\langle \vec{v}, \vec{v} \rangle \geq 0\), with \(\langle \vec{v}, \vec{v} \rangle = 0 \Leftrightarrow \vec{v} = \vec{0}\)
We can revisit \(\mathbb{R}^2\), which is also an inner product space. The inner product on \(\mathbb{R}^2\) is defined as
\[ \langle \vec{x}, \vec{y} \rangle = \vec{y}^T \vec{x} \]
It can be shown in general that this operation satisfies the properties of an inner product space, but here we will only show examples demonstrating these properties.
Linearity
Let \(\alpha = \beta = 2\) and \(\vec{u} = \langle 1 , 1 \rangle\), \(\vec{v} = \langle 2 , 2 \rangle\), \(\vec{w} = \langle 3 , 3 \rangle\). Then
\[ \begin{aligned} \langle \alpha \vec{u} + \beta \vec{v} , \vec{w} \rangle &= \langle \langle 2 , 2 \rangle + \langle 4 , 4 \rangle, \langle 3 , 3 \rangle \rangle \\ &= \langle \langle 6 , 6 \rangle, \langle 3 , 3 \rangle \rangle = 36 \\ &= 2 \, (6) + 2 \, (12) \\ &= 2 \langle \langle 1 , 1 \rangle, \langle 3 , 3 \rangle \rangle + 2 \langle \langle 2, 2 \rangle, \langle 3, 3 \rangle \rangle \\ &= \alpha \langle \vec{u}, \vec{w} \rangle + \beta \langle \vec{v}, \vec{w} \rangle \end{aligned} \]
Symmetry
Using the same \(\vec{u}\) and \(\vec{v}\) from above, we find
\[ \begin{aligned} \langle \vec{u} , \vec{v} \rangle &= \langle 2 , 2 \rangle ^T \langle 1 , 1 \rangle \\ &= 4 \\ \langle \vec{v} , \vec{u} \rangle^* &= \langle 1 , 1 \rangle ^T \langle 2 , 2 \rangle \\ &= 4^* \\ &= 4 \end{aligned} \]
Nullity
\[ \begin{aligned} \langle \langle 0 , 0 \rangle, \langle 0 , 0 \rangle \rangle &= \langle 0 , 0 \rangle ^T \langle 0 , 0 \rangle \\ &= 0 \end{aligned} \]
The above properties can be demonstrated on \(\mathbb{R}^n\) for any \(n\), and the inner product on such spaces is known as the dot product.
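As a quick sanity check, the following numpy sketch verifies the three inner product properties numerically for the dot product on \(\mathbb{R}^2\), using the same vectors as the worked examples above:

```python
import numpy as np

# Inner product on R^2: <x, y> = y^T x (the dot product for real vectors).
def inner(x, y):
    return np.dot(y, x)

u, v, w = np.array([1., 1.]), np.array([2., 2.]), np.array([3., 3.])
alpha, beta = 2.0, 2.0

# Linearity: <alpha*u + beta*v, w> == alpha*<u, w> + beta*<v, w>
assert np.isclose(inner(alpha * u + beta * v, w),
                  alpha * inner(u, w) + beta * inner(v, w))

# Symmetry: <u, v> == conj(<v, u>)  (conjugation is a no-op for real vectors)
assert np.isclose(inner(u, v), np.conj(inner(v, u)))

# Nullity: <0, 0> == 0
assert np.isclose(inner(np.zeros(2), np.zeros(2)), 0.0)

print("all three properties hold for these example vectors")
```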
Remember that the purpose of discussing inner product spaces is to build an abstract mathematical framework that we can map onto waveforms, giving us a solid foundation for our analysis of signals. To flesh out this framework, we must examine a few other concepts in inner product spaces which will be useful when we work with signals. Many of them should be very familiar from previous linear algebra courses.
3.1 Norm and Angle
In an inner product space, the norm of a vector is given by the square root of its inner product with itself: \(\|\vec{v}\| = \sqrt{\langle \vec{v}, \vec{v} \rangle}\). Intuitively, the norm of a vector is its length.
The inner product can be related to an angle between two vectors \(\vec{v_1}\) and \(\vec{v_2}\) by \(\cos \theta = \frac{\langle \vec{v_1}, \vec{v_2} \rangle}{\| \vec{v_1} \| \cdot \| \vec{v_2} \|}\)
This is useful because it allows us to gain some intuition into the relationship between vectors by using pictures.
Two vectors are orthogonal if \(\langle \vec{v_1}, \vec{v_2} \rangle = 0\), i.e., \(\theta = \pm \frac{\pi}{2}\) \((\pm 90^{\circ})\), and the familiar Pythagorean Theorem still holds:
If \(\vec{v_1}, \vec{v_2}\) are orthogonal, then \(\| \vec{v_1} + \vec{v_2} \|^2 = \| \vec{v_1} \|^2 + \| \vec{v_2} \|^2\)
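A short numpy sketch (with arbitrarily chosen vectors) illustrating the norm, the angle formula, and the Pythagorean Theorem for orthogonal vectors:

```python
import numpy as np

v1 = np.array([3., 0.])
v2 = np.array([0., 4.])

norm1 = np.sqrt(np.dot(v1, v1))               # ||v1|| = sqrt(<v1, v1>) = 3
norm2 = np.sqrt(np.dot(v2, v2))               # ||v2|| = 4
cos_theta = np.dot(v1, v2) / (norm1 * norm2)  # 0, so theta = 90 degrees

# Pythagorean theorem for orthogonal vectors: ||v1 + v2||^2 = ||v1||^2 + ||v2||^2
lhs = np.dot(v1 + v2, v1 + v2)                # 25
rhs = norm1**2 + norm2**2                     # 25
print(cos_theta, lhs, rhs)
```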
3.2 Orthonormal Basis
For any inner product space we can define an orthonormal basis. An orthonormal basis for an inner product space \(\mathbb{V}\) is a collection of basis vectors \(\{ \vec{\varphi}_1, \; \vec{\varphi}_2, \; \ldots, \; \vec{\varphi}_K \}\) such that:
- \(\langle \vec{\varphi}_i, \vec{\varphi}_j \rangle = \begin{cases} 1 & \text{if} \; i = j \; \; \text{(unit length)} \\ 0 & \text{if} \; i \neq j \; \; \text{(orthogonal)} \end{cases}\)
When \(i=j\), the inner product (and therefore length, see Section 3.1) of the vectors is \(1\). Vectors which have a length of \(1\) are called normal. When \(i \neq j\), the inner product (and therefore cosine of the angle between them, see Section 3.1) of the two vectors is \(0\). Vectors with an angle of \(90^\circ\) between them are called orthogonal. This is why the set of vectors is called orthonormal.
- Any vector in \(\mathbb{V}\) can be written uniquely as a linear combination of the basis vectors \[ \vec{v} = \sum_{i=1}^K \alpha_i \vec{\varphi}_i \; \; \; \text{for some} \; \alpha_i \in \mathbb{R} \ (\text{or} \; \mathbb{C}) \]
This property is what makes the set of vectors a basis.
It is important to highlight two equations which can be derived from the above:
- Analysis: \(\alpha_i = \langle \vec{v}, \vec{\varphi}_i \rangle\), \(i = 1, 2, \ldots, K\)
- Synthesis: \(\vec{v} = \sum_{i=1}^K \alpha_i \, \varphi_i\) for some \(\alpha_i \in \mathbb{R}\) (or \(\mathbb{C}\))
We use the analysis equation to decompose a vector into the coefficients required to represent it as a linear combination of basis vectors. The synthesis equation does the reverse: it generates a single vector from its basis vectors and coefficients.
We know that any vector can be represented as a linear combination of basis vectors.
\[ \vec{v} = \sum_{i=1}^K \alpha_i \varphi_i \]
The inner product of this vector with any other basis vector can be written
\[ \langle \vec{v}, \varphi_j \rangle = \langle \sum_{i=1}^K \alpha_i \varphi_i, \varphi_j \rangle \]
Because the inner product has the property of linearity, we can reduce this expression
\[ \begin{aligned} \langle \vec{v}, \varphi_j \rangle &= \langle \sum_{i=1}^K \alpha_i \varphi_i, \varphi_j \rangle \\ &= \sum_{i=1}^K \alpha_i \langle \varphi_i, \varphi_j \rangle \end{aligned} \]
We know that \(\langle \varphi_i, \varphi_j \rangle = 0\) except when \(i = j\), where it is \(1\). Thus, we can reduce the expression to
\[ \langle \vec{v}, \varphi_j \rangle = \alpha_j \]
Clearly, we can see that by simply taking the inner product of a vector with one of the basis vectors we have found the coefficient on that basis vector.
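The following numpy sketch illustrates the analysis and synthesis equations using a hypothetical orthonormal basis for \(\mathbb{R}^2\) (the standard basis rotated by \(45^\circ\)):

```python
import numpy as np

# An orthonormal basis for R^2: the standard basis rotated by 45 degrees.
phi1 = np.array([1., 1.]) / np.sqrt(2)
phi2 = np.array([-1., 1.]) / np.sqrt(2)
basis = [phi1, phi2]

v = np.array([3., 1.])

# Analysis: alpha_i = <v, phi_i>
alphas = [np.dot(v, phi) for phi in basis]

# Synthesis: v = sum_i alpha_i * phi_i
v_reconstructed = sum(a * phi for a, phi in zip(alphas, basis))

print(alphas)           # coefficients of v in the rotated basis
print(v_reconstructed)  # recovers [3., 1.]
```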
Note that given any set of vectors \(\{ \vec{v_1}, \; \vec{v_2}, \; \ldots, \; \vec{v_M} \}\), we can generate a set of orthonormal basis vectors with the same span via the Gram-Schmidt procedure.
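Below is a minimal sketch of the Gram-Schmidt procedure for real-valued vectors (the input vectors are hypothetical; linearly dependent inputs are simply skipped):

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-12):
    """Return an orthonormal basis spanning the same subspace as `vectors`."""
    basis = []
    for v in vectors:
        # Subtract the projection of v onto each basis vector found so far.
        residual = v - sum(np.dot(v, phi) * phi for phi in basis)
        norm = np.linalg.norm(residual)
        if norm > tol:              # skip vectors already in the span
            basis.append(residual / norm)
    return basis

vectors = [np.array([1., 1., 0.]), np.array([1., 0., 1.]), np.array([2., 1., 1.])]
for phi in gram_schmidt(vectors):
    print(phi)   # two orthonormal vectors; the third input is linearly dependent
```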
Finally, note that the dimension of an inner product space is the size of the orthonormal basis (i.e., the number of basis vectors).
4 Signals as Vectors in an Inner Product Space
Why is any of the above math relevant? Signals are vectors in an inner product space!
We can define the correlation of two signals as an inner product:
\[ \langle x(t), y(t) \rangle = \int_{-\infty}^\infty x(t) y^*(t) dt \]
on the set of all finite-energy waveforms. We can define correlation this way because it satisfies the properties required for inner products (see Section 3). Because the set of waveforms \(\{ s_0 (t), \; s_1 (t), \; \ldots,\; s_{M-1} (t) \}\) used for communication are invariably finite energy, they are in this space and thus this definition of correlation applies.
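As an illustration, here is a numerical sketch approximating this correlation integral with a Riemann sum, using two arbitrarily chosen finite-energy waveforms on \([0, 1]\):

```python
import numpy as np

# Two finite-energy waveforms on [0, 1] (chosen arbitrarily for illustration).
t = np.linspace(0.0, 1.0, 10_000)
dt = t[1] - t[0]
x = np.sin(2 * np.pi * t)      # x(t)
y = np.cos(2 * np.pi * t)      # y(t)

# <x(t), y(t)> = integral of x(t) * conj(y(t)) dt, approximated by a Riemann sum.
correlation = np.sum(x * np.conj(y)) * dt
print(correlation)   # approximately 0: sine and cosine are orthogonal over a full period
```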
In order to take advantage of this new mathematical framework, we have to have some way to represent signals as vectors. The solution is to find an orthonormal basis for our set of signals. As described in Section 3.2, with an orthonormal basis we can represent any signal as a linear combination of those basis vectors.
Let \(\{ \vec{\varphi}_1 (t), \; \vec{\varphi}_2 (t), \; \ldots, \; \vec{\varphi}_K (t) \}\) be an orthonormal basis for the subspace spanned by \(\{ s_0 (t), \; s_1 (t), \; \ldots,\; s_{M-1} (t) \}\). Note that \(K\) is finite and often much smaller than \(M\).
Then
\[ s_m (t) = \sum_{k=1}^K \alpha_{mk} \varphi_k (t) \]
i.e., the signal \(s_m (t)\) can be represented as a linear combination of basis vectors. We can also write this relationship as a matrix multiplication:
\[ s_m (t) = \left[ \begin{array}{cccc} \alpha_{m1} & \alpha_{m2} & \ldots & \alpha_{mK} \end{array} \right] \left[ \begin{array}{c} \varphi_{1} (t) \\ \varphi_{2} (t) \\ \vdots \\ \varphi_{K} (t) \end{array} \right] \]
We can find each coefficient by performing the inner product of the signal with the corresponding basis vector:
\[ \alpha_{mk} = \langle s_m (t), \; \varphi_k (t) \rangle \]
Note: this is sometimes called the projection of \(s_m (t)\) onto \(\varphi_k (t)\).
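For instance, the following sketch (with a hypothetical basis of unit-energy sine and cosine pulses on \([0, 1]\)) recovers the coefficients of a signal by projecting it onto each basis function:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 10_000)
dt = t[1] - t[0]

def inner(x, y):
    """Approximate <x(t), y(t)> = integral x(t) y*(t) dt with a Riemann sum."""
    return np.sum(x * np.conj(y)) * dt

# Two orthonormal basis functions on [0, 1] (hypothetical choice).
phi1 = np.sqrt(2) * np.sin(2 * np.pi * t)
phi2 = np.sqrt(2) * np.cos(2 * np.pi * t)

# A signal built from the basis with known coefficients.
s_m = 3.0 * phi1 - 2.0 * phi2

# Analysis: recover the coefficients by projection onto each basis function.
alpha_m1 = inner(s_m, phi1)   # approximately  3.0
alpha_m2 = inner(s_m, phi2)   # approximately -2.0
print(alpha_m1, alpha_m2)
```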
Suppose we have some other signal \(s_{m'} (t)\) and we would like to find the correlation between it and \(s_m (t)\). Normally, this operation would require an integral. We can instead represent each signal as a vector in signal space (as we saw above) and take the inner product of the two vectors:
\[ \begin{aligned} \langle s_m (t), s_{m'} (t) \rangle &= \left\langle \sum_{i=1}^K \alpha_{mi} \, \varphi_i (t), \sum_{j=1}^K \alpha_{m'j} \, \varphi_{j} (t) \right\rangle \\ &= \sum_{i=1}^K \sum_{j=1}^K \alpha_{mi} \, \alpha_{m'j}^* \langle \varphi_i (t), \varphi_{j} (t) \rangle \end{aligned} \]
Because the basis is orthonormal, \(\langle \varphi_i (t), \varphi_{j} (t) \rangle = 0\) whenever \(i \neq j\) and \(1\) when \(i = j\). Only the \(i = j\) terms survive, so the double summation collapses to a single one:
\[ = \sum_{i=1}^K \alpha_{mi} \, \alpha_{m'i}^* \]
If we consider the summation over coefficients as a vector operation, we can rewrite the above as
\[ = \vec{s}_{m'}^{\, \dagger} \, \vec{s}_{m} \]
which should look familiar: it is a dot product in Euclidean space!
Using concepts of inner product spaces, we were able to use a simple dot product to compute what normally would require an integral. Leveraging the properties of vectors makes it much more manageable to work with complex waveforms.
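Continuing the sketch above, we can check numerically that the dot product of the coefficient vectors matches the correlation integral (the coefficient values here are arbitrary):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 10_000)
dt = t[1] - t[0]

def inner(x, y):
    """Approximate <x(t), y(t)> with a Riemann sum."""
    return np.sum(x * np.conj(y)) * dt

# Orthonormal basis functions from the previous sketch.
phi1 = np.sqrt(2) * np.sin(2 * np.pi * t)
phi2 = np.sqrt(2) * np.cos(2 * np.pi * t)

# Two signals and their coefficient vectors in the {phi1, phi2} basis.
a_m  = np.array([3.0, -2.0])
a_mp = np.array([1.0,  4.0])
s_m  = a_m[0]  * phi1 + a_m[1]  * phi2
s_mp = a_mp[0] * phi1 + a_mp[1] * phi2

print(inner(s_m, s_mp))    # correlation computed with the integral, approximately -5
print(np.vdot(a_mp, a_m))  # same value from the dot product  s_mp^dagger s_m  = -5
```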
5 Connections Between Two Inner Product Spaces
We can map between two inner product spaces using the Analysis and Synthesis equations. We call \(\mathbb{C}^K\) the signal space (very often \(K=2\)). A signal space has a few notable components:
- \(\{ \vec{s_0}, \; \vec{s_1}, \; \ldots, \; \vec{s_{M-1}} \}\) collectively called a constellation.
- \(\vec{s}_m\) called a symbol, \(m=0, 1, \ldots, M-1\)
When designing a signal space, we can either design the set of signals \(\{ s_0 (t), \; s_1 (t), \; \ldots,\; s_{M-1} (t) \}\) directly, or choose both a constellation \(\{ \vec{s_0}, \; \vec{s_1}, \; \ldots, \; \vec{s_{M-1}} \}\) and a basis \(\{ \varphi_1 (t), \; \varphi_2 (t), \; \ldots, \; \varphi_K (t) \}\).
For example, with \(M = 4\), \(K = 2\), and symbol energy \(E\), notice that
\[ s_0 (t) = \left[ \begin{array}{cc} \sqrt E & \sqrt E \\ \end{array} \right] \left[ \begin{array}{c} \varphi_{1} (t) \\ \varphi_{2} (t) \\ \end{array} \right] \] \[ s_1 (t) = \left[ \begin{array}{cc} \sqrt E & -\sqrt E \\ \end{array} \right] \left[ \begin{array}{c} \varphi_{1} (t) \\ \varphi_{2} (t) \\ \end{array} \right] \] \[ s_2 (t) = \left[ \begin{array}{cc} -\sqrt E & \sqrt E \\ \end{array} \right] \left[ \begin{array}{c} \varphi_{1} (t) \\ \varphi_{2} (t) \\ \end{array} \right] \] \[ s_3 (t) = \left[ \begin{array}{cc} -\sqrt E & -\sqrt E \\ \end{array} \right] \left[ \begin{array}{c} \varphi_{1} (t) \\ \varphi_{2} (t) \\ \end{array} \right] \]
i.e., we are able to represent distinct signals \(s_i\) as linear combinations of some basis signals \(\varphi_j\). We can see in Figure 1 how we map between time domain signals and their representation in a constellation. Notice the axes on the constellation correspond to the basis vectors.
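A sketch of this construction in numpy, assuming a cosine/sine basis, symbol duration \(T = 1\), and energy \(E = 1\) (all illustrative choices, since the notes leave the basis unspecified):

```python
import numpy as np

T = 1.0                                     # symbol duration (assumed)
t = np.linspace(0.0, T, 1000, endpoint=False)
E = 1.0                                     # symbol energy (assumed)

# A hypothetical orthonormal basis on [0, T).
phi1 = np.sqrt(2 / T) * np.cos(2 * np.pi * t / T)
phi2 = np.sqrt(2 / T) * np.sin(2 * np.pi * t / T)

# The four symbols of the constellation: one per 2-bit message.
constellation = {
    "00": np.array([+np.sqrt(E), +np.sqrt(E)]),
    "01": np.array([+np.sqrt(E), -np.sqrt(E)]),
    "10": np.array([-np.sqrt(E), +np.sqrt(E)]),
    "11": np.array([-np.sqrt(E), -np.sqrt(E)]),
}

# Synthesis: s_m(t) = alpha_m1 * phi1(t) + alpha_m2 * phi2(t)
waveforms = {bits: a[0] * phi1 + a[1] * phi2 for bits, a in constellation.items()}
print({bits: w[:3] for bits, w in waveforms.items()})  # first few samples of each waveform
```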