Autocorrelation is not a true correlation. It is easiest to understand with a simple one-dimensional example. Imagine you had a coin toss game where the tosses were additive: if it is heads, you add one; if it is tails, you add zero. I recommend trying this on your own in Excel if you know how to generate random numbers.
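If you prefer code to Excel, a minimal sketch of the game might look like this (using Python and numpy in place of Excel's random-number functions; the seed and the number of tosses are arbitrary choices):

```python
# Simulate the additive coin-toss game: heads adds one, tails adds zero.
import numpy as np

rng = np.random.default_rng(seed=1)      # fixed seed so the run is repeatable
tosses = rng.integers(0, 2, size=100)    # 1 = heads, 0 = tails, independent draws
series = np.cumsum(tosses)               # the running total is the time series

print(tosses[:12])                       # the independent coin tosses
print(series[:12])                       # the autocorrelated cumulative series
```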
Each toss is independent of and uncorrelated with every other toss. There is no correlation among the coin tosses themselves, but the running total will certainly show autocorrelation.
Imagine the sequence of coin tosses is HTHTHTHHTTTH, so the running totals would be 1, 1, 2, 2, 3, 3, 4, 5, 5, 5, 5, 6, which looks roughly like a highly correlated line. If you treat those values as occurring at times 1 through 12, plot them, and run a regression, you get Y = 0.4685X + 0.4545 with an R^2 over 95%. That is an incredible degree of correlation when in fact every underlying toss was uncorrelated with the tosses before it; no toss could be used to predict any other toss.
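You can check those numbers with a few lines of code (a sketch, assuming numpy; np.polyfit fits the least-squares line and R^2 comes from the squared correlation):

```python
# Reproduce the regression on the 12 cumulative values 1,1,2,2,3,3,4,5,5,5,5,6.
import numpy as np

x = np.arange(1, 13)                          # times 1..12
y = np.array([1, 1, 2, 2, 3, 3, 4, 5, 5, 5, 5, 6])

slope, intercept = np.polyfit(x, y, 1)        # least-squares line
r_squared = np.corrcoef(x, y)[0, 1] ** 2      # R^2 for a simple linear fit

print(f"Y = {slope:.4f}X + {intercept:.4f}")  # Y = 0.4685X + 0.4545
print(f"R^2 = {r_squared:.4f}")               # about 0.95
```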
The consequence of concern in the textbook is that the standard errors are too small, so tests of significance are distorted and the t-test will show significance too often.
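One way to see that distortion in a simulation is the sketch below. Note the modification, which is mine and not part of the original game: tails subtracts one instead of adding zero, so the series has no genuine drift and no true time trend, yet the naive t-test still flags a "significant" slope far more often than the nominal 5% of the time.

```python
# Regress many driftless random walks (heads +1, tails -1) on time and count
# how often the slope is flagged as significant at the 5% level.
# Assumes scipy; the walk length and replication count are arbitrary choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)
n, reps, hits = 100, 1000, 0
t = np.arange(n)

for _ in range(reps):
    steps = rng.choice([-1, 1], size=n)       # independent +/-1 coin tosses
    walk = np.cumsum(steps)                   # cumulative series with no true trend
    result = stats.linregress(t, walk)
    if result.pvalue < 0.05:
        hits += 1

print(f"Slopes flagged significant: {hits / reps:.0%}")  # well above the nominal 5%
```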
Although the individual events are random, the trend is forced by the additive relationship. So the time trend is predictable, but it is dangerous to read too much into the series, because it is unlikely you really know what is going on.
The Durbin–Watson test or the Breusch–Godfrey test can be used to discover the problem. Common sense warns you that if the present value is built from past values, even though the individual events are independent, then you will have autocorrelation.
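Here is a sketch of how those diagnostics can be run in Python with statsmodels (the original answer does not name any software, so the package choice is an assumption):

```python
# Fit a time-trend regression to a simulated coin-toss series, then run the
# Durbin-Watson and Breusch-Godfrey diagnostics on the residuals.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

rng = np.random.default_rng(seed=3)
y = np.cumsum(rng.integers(0, 2, size=100))   # the additive coin-toss series
X = sm.add_constant(np.arange(1, 101))        # time trend plus an intercept

fit = sm.OLS(y, X).fit()

dw = durbin_watson(fit.resid)                 # near 2 means no autocorrelation;
print(f"Durbin-Watson: {dw:.2f}")             # values near 0 signal positive autocorrelation

lm_stat, lm_pvalue, f_stat, f_pvalue = acorr_breusch_godfrey(fit, nlags=1)
print(f"Breusch-Godfrey p-value: {lm_pvalue:.4f}")  # small p-value -> autocorrelated residuals
```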