Random Processes
5/12/2023

Previous SPTK Post: Examples of Random Variables
Next SPTK Post: The Sampling Theorem

In this Signal Processing ToolKit post, I provide an introduction to the concept and use of random processes (also called stochastic processes). This is my perspective on random processes, so although I'll introduce and use the conventional concepts of stationarity and ergodicity, I'll end up focusing on the differences between stationary and cyclostationary random processes. The goal is to illustrate those differences with informative graphics and videos to build intuition in the reader about how the cyclostationarity property comes about, and about how the property relates to the more abstract mathematical object of a random process on one hand and to the concrete data-centric signal on the other. So … this is the first SPTK post that is also a CSP post. Jump straight to ‘Significance of Random Processes in CSP’ below.

We started our signal-processing toolkit journey by looking at signals, including rectangles, triangles, unit-steps, sinc, etc. We then looked at representing arbitrary signals of time in terms of building-block functions such as Walsh functions, impulses, and harmonically related sine waves. The latter led us to the useful frequency-domain signal representations of the Fourier series and Fourier transform.

We shifted focus to systems–those entities that act on our signals to achieve some goal–and found that our analysis tools enabled much insight into system behavior if the system was linear and time-invariant. We added the crucial tool of convolution to our toolkit, and looked at various kinds of linear time-invariant systems, also known as filters.

We then realized that many of our most important signals (in CSP) don't quite fit into our analysis framework. In particular, communication signals are well-modeled as random infinite-duration power signals, which are not Fourier transformable. The concept of a random variable bridges the gap between an abstract probability space of events and sets of measurements or numbers that we can operate on with arithmetic, algebra, calculus, and associated computing devices. But random variables are not functions of time. So we now take the next logical step: going from a random variable to a random function or, as it is more commonly referred to, a random process.

How should we generalize the concept of a signal to include both probability and infinite time? We need infinite time because we want to study our systems' behavior for arbitrarily long-duration inputs. We need probability because the signals we use in communication theory and practice are inherently unpredictable–they are random.

A useful model of a communication signal is an infinite-duration signal that incorporates some parameters or variables that are in some sense unknown to a receiver of the signal:

x(t) = \sum_{k=-\infty}^{\infty} a_k p(t - k T_0).

Here a_k is the kth ‘symbol’ to be transmitted to the receiver, p(t) is a ‘pulse function’ to be modulated (multiplied) by the symbol, and 1/T_0 is the rate of producing pulses, and therefore the rate of transmitting symbols.

It is possible to use such a model–a single infinite-duration random signal–to construct an entire probabilistic theory of signals. That theory is the fraction-of-time (FOT) probability theory, and I hope to get to that in the SPTK series, or in a CSP post. A more conventional way, and a way that is somewhat easier mathematically even if more obscure physically, is to use a generalization of our previous notion of a probability space.

Probability

A random process is a collection of random variables indexed by time. The indexing can be some other independent variable, such as distance. But for us, swimming in the deep waters of CSP, our random processes are indexed by time.

We use the notation X(t) to denote a random process–typically we use upper-case letters to denote a process, and we'll continue to use lower-case letters to denote signals. For each t, X(t) is a random variable with some cumulative distribution function (CDF) and probability density function (PDF). Moreover, every collection of n random variables has an nth-order joint PDF and CDF. In this post, we'll focus on n = 1 and n = 2. (Here we are considering real-valued processes, so the definition of the CDF is straightforward; complex-valued random processes are only a little more tricky.) That will be sufficient to ground our discussion in the familiar quantities of mean value (first-order moment), power spectrum, autocorrelation (second-order moment), spectral correlation, and cyclic autocorrelation.

Given the definition of a random process, it follows that the random variable X(t_1) has some PDF we can denote by f_{X(t_1)}(x) and the random variable X(t_2) has PDF f_{X(t_2)}(x). These two PDFs may or may not be identical. The two random variables may or may not be independent, correlated, uncorrelated, etc.
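The pulse-train signal model x(t) = \sum_k a_k p(t - k T_0) can be sketched numerically. The following is a minimal Python/NumPy illustration, not the post's own code; the binary symbol alphabet, rectangular pulse, and symbol interval are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

T0 = 10          # samples per symbol interval (symbol rate is 1/T0)
n_sym = 64       # number of symbols in this finite-length realization
p = np.ones(T0)  # p(t): a rectangular pulse function (illustrative choice)

# a_k: the k-th symbol, drawn here from a binary (BPSK-like) alphabet
a = rng.choice([-1.0, 1.0], size=n_sym)

# x(t) = sum_k a_k p(t - k T0): superpose shifted pulses scaled by symbols
x = np.zeros(n_sym * T0)
for k in range(n_sym):
    x[k * T0:(k + 1) * T0] += a[k] * p
```

Each run of the generator yields a different symbol sequence a_k and therefore a different realization x(t), which is exactly the "parameters unknown to the receiver" aspect of the model.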
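To connect the ensemble view of a random process to computation, here is a small sketch under stated assumptions: it builds an ensemble of realizations of a simple random-phase sinusoid (a process chosen for illustration, not one discussed above) and estimates the first-order moment at a fixed time t_1 and the second-order moment between the random variables X(t_1) and X(t_2) by averaging down the ensemble rather than over time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ensemble of M realizations of X(t) = cos(2*pi*f*t + Phi),
# with Phi uniform on [0, 2*pi): a classic stationary example.
M, N, f = 10000, 100, 0.05
t = np.arange(N)
phi = rng.uniform(0.0, 2 * np.pi, size=(M, 1))
X = np.cos(2 * np.pi * f * t + phi)   # shape (M, N): one row per realization

t1, t2 = 10, 30

# First-order moment at fixed t1: average across the ensemble, not over time
mean_t1 = X[:, t1].mean()

# Second-order moment (autocorrelation) of the pair X(t1), X(t2)
corr = (X[:, t1] * X[:, t2]).mean()

# For this process the exact value is 0.5*cos(2*pi*f*(t1 - t2))
theory = 0.5 * np.cos(2 * np.pi * f * (t1 - t2))
```

Here the ensemble mean is near zero for every t and the correlation depends only on t_1 - t_2, previewing the stationarity concepts introduced in this post; a cyclostationary process would instead show moments that vary periodically with time.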