Category: Sci/Tech

Sharing is Caring, but Resharing is Poison (by )

I've noticed a trend that has led me to develop a theory.

It's widely said that social networks start off fun and then decline; I've usually heard this attributed to some combination of (a) all your colleagues, family, and former schoolmates joining or (b) it "becoming mainstream" and a rabble of ignorant masses pouring in.

This implies an inevitability - such environments are fun when they're occupied by an exclusive bunch of early adopters, but if they're fun they'll become more popular, and before long, they'll be full of Ordinary People who Ruin It. Good social networks are, therefore, destined either to be ruined by going mainstream, or to die out because they never take off.

I disagree. The elitism inherent in that viewpoint is a warning sign that it's a convenient and reassuring fiction, for a start; and I have an alternative theory. As you may have guessed from this post's title, I think that the provision of a facility to reshare (retweet, repost) others' content with a simple action is a major contributing factor to making a social network descend into a cesspit of fake news and hate.

Back in the early days of Twitter, most of the tweets were things that people had typed out themselves. Many of them were links to other things, but doing that required manually copying the URL and pasting it into a tweet, and most people added a word or two of commentary when they did so.

But Twitter these days is dominated by retweets. In a quick survey of the current tops of my various Twitter timelines, I saw 7 retweets and 5 original tweets. I see less of what my follows are doing, and more of what my follows are liking about what others are doing.

As these centralised social networks are advertising companies, this is a desirable state of affairs for them, for at least two reasons:

  1. Single-click resharing means that content can spread virally across the platform, getting seen by millions of people in a very short timeframe. This is attractive to advertisers, so the network can make money selling tools to help them encourage this, to track the spread of content, and to generally spread the idea that their network is a place where things spread quickly and influence culture.
  2. A big part of their business model is to better profile their users, so they can sell targeted advertising. It's harder for a computer to analyse your prose to learn about you (bearing in mind you might use complicated linguistic tricks such as irony) than to just see if you click a button in response to something or not. The algorithm might not be entirely clear on the meaning of the content you've just reshared, but it now knows that you have something in common with the four million other people who also reshared it; and cross-referencing that with other information it holds about you and them is a powerful predictive tool.

But that same ability for things to rapidly spread is the driving force behind:

  1. The rapid spread of fake news; tools designed to help advertisers are easily adopted by people wanting to control our minds for reasons even worse than mere financial gain.
  2. Hate storms, when something gets widely shared among a community of people who hate the behaviour implied by the original content, who then all respond angrily to it within the social network. Often, the amplified feeling of communal hate, and the wide reach bringing it to the attention of unhinged and morally dubious people, leads to crimes being committed against the target as "revenge".
  3. A decreased sense of community, due to seeing more and more content from outside your group. Interacting with the social networks becomes more like watching TV than sitting chatting with your friends.

I think the elitist complaint that social networks go wrong when they "go mainstream" and "the normals come and ruin it" is really just a misguided attempt to put the lingering feeling embodied in that last point into words.

Looking back at the original decentralised social networks such as email, Usenet and IRC, they all lacked a single-click "reshare" facility - but some of the criticisms of email and Usenet (excess crossposting, forwarded chain emails) come down to it still being a bit too easy to share things across community boundaries. IRC escaped this.

I think there's no reason a social network can't scale to cover the planet without becoming a cesspit - but I suspect that making forwarding content on too easy is a great way to drag it down the pan.

Mind your Is and Qs: The Art of Frequency-Division Multiplexing (by )

As previously discussed, I've been learning about radio lately. As part of that, I've been diving into things I've always found confusing in the past and trying to properly understand them; when I succeed at this, I'd like to share what I've found, hopefully clarifying things that aren't explained so well elsewhere and helping others in the same situation I was in...

I'm going to start by explaining some pretty basic stuff about waves, which most of you will already know, but bear with me - I'm trying to emphasise certain things (phase!) that are often glossed over a bit at that level, and cause confusion later.

Today, we're going to talk about modulation. To be specific, frequency-division multiplexing: the technique of sharing some communication medium between lots of channels by putting them on different frequencies.

This is used to great effect in the radio spectrum, where the shared medium is the electromagnetic field permeating all of space; but it applies just as well to any medium capable of carrying waves between a bunch of transmitters and receivers. The media vary somewhat in how signals at different frequencies propagate, and what background noise exists, and what hardware you need to interface to them, but the principle remains the same. Examples other than radio include:

  • Ripples on a lake
  • Electrical impulses in a coaxial cable connecting multiple stations (old Ethernet, cable TV)
  • Noises in air

The key thing about these media is that if a transmitter emits a wave into them, then that wave (subject to propagation distortions) plus some ever-present background noise will arrive at a receiver. So if we can find a way for everyone to communicate using waves, without messing up each other's communications, we're good. It doesn't matter that impulses in a cable travel along one dimension, ripples on a lake travel in two, and noises (or radio waves) in air travel in three: as we're usually thinking about a wave coming from a transmitter to a receiver, we can just think about the one-dimensional case all the time. That's enough for communication - more dimensions are involved when we use waves to find our position (GPS!), but that's not what we're talking about here.


Waves can be all sorts of shapes: square waves with sharp transitions between two levels, triangular waves that go up smoothly then turn around and go down smoothly, and then turn around again and go back up, between two levels; complex wiggly waves that go all over the place... but, it turns out, all of those wave shapes can be made by adding up a bunch of smoothly curving sinusoidal waves.
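That claim is easy to poke at numerically. Here's a Python sketch (the function name is mine): summing the odd harmonics of a sine wave, with amplitudes falling off in proportion to the harmonic number, approximates a square wave ever more closely.

```python
import math

# Approximate a square wave by summing sinusoids: the first n_terms odd
# harmonics of its Fourier series. The more terms, the closer the sum
# hugs the square wave's sharp-edged shape.
def square_wave_approx(t, n_terms):
    return (4 / math.pi) * sum(
        math.sin((2 * k + 1) * t) / (2 * k + 1) for k in range(n_terms)
    )

# In the middle of the "high" half-cycle the sum approaches +1, and in
# the middle of the "low" half-cycle it approaches -1.
high = square_wave_approx(math.pi / 2, 200)
low = square_wave_approx(3 * math.pi / 2, 200)
```

With 200 terms, `high` and `low` come out within about 0.002 of +1 and -1 respectively.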

Such a wave is "periodic": it's the same pattern, repeated again and again, each repetition identical to the last. Each repetition of the pattern is called a "cycle"; as that word suggests, the origin of sinusoidal waves is in the geometry of circles - we don't need to go into that properly here, but what we do need to know is that each full cycle of the wave corresponds to going around a circle, and as such, we can talk about how far along the cycle we are in terms of the circular angle covered. A full cycle is 360 degrees. Half of it is 180 degrees; since the wave goes from the middle, to a peak, to the middle, to a trough, to the middle again before repeating, 180 degrees is enough to get you from one middle-crossing of the wave to another, with a single peak or trough in between; or to get you from a peak to the next trough. 90 degrees gets you from where the wave crosses the middle to the next peak or trough, or from a peak or trough to the next middle-crossing.

Those waves can be described entirely by three numbers:

  • Amplitude: How big the waves are. This can be measured from trough to peak, or can be measured in terms of how far the troughs and peaks deviate from the middle - the difference is just a factor of 2, as they're symmetrical.
  • Frequency / Wavelength: The frequency is how many complete cycles of the wave (measured, say, from peak to peak) pass a fixed point (such as a receiver) per second. As all waves in our media travel at the same speed, this means that you can also measure the same thing with the wavelength - how much physical distance, in metres, a complete cycle of the wave takes up. The frequency is in Hertz, which means "per second"; the wavelength is in metres; and multiplying the frequency by the wavelength always gives you a speed (in metres per second), that being the speed of wave propagation. In empty space, radio waves travel at the speed of light (because light is an electromagnetic wave, just like a radio wave): about 300,000,000 metres per second.
  • Phase: This is a trickier one, and often neglected. Unlike the other two, which are things you could unambiguously measure at any point on the wave and get the same answer, "phase" is relative to an observer. Imagine two waves are coming at you from different sources, with the same amplitude and frequency - but the peaks of wave A arrive slightly before the peaks of wave B. Remembering how each cycle of the wave can be considered as a 360 degree rotation, we might say that wave B is lagging 90 degrees behind A, if B's peak arrives when A is just crossing the middle after a peak.
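As an aside, the frequency/wavelength relationship above is easy to sanity-check (a Python sketch; the constant and function names are mine):

```python
# Frequency (Hz) times wavelength (m) always gives the propagation
# speed (m/s); for radio waves in empty space, that's roughly the
# speed of light.
SPEED_OF_LIGHT = 300_000_000  # metres per second, approximately

def wavelength_metres(frequency_hz):
    """Wavelength of a radio wave in empty space, in metres."""
    return SPEED_OF_LIGHT / frequency_hz

# A 1 MHz carrier (medium-wave AM broadcast territory) is a 300 m wave:
print(wavelength_metres(1_000_000))  # → 300.0
```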

So, unlike amplitude and frequency/wavelength, phase is always a relative measure between two waves, or perhaps between a wave and itself somewhere else: if a transmitter is emitting a wave and we're receiving it from a thousand metres away, because it takes time for that wave to travel, we will be seeing it phase-shifted compared to what the transmitter is emitting at that point in time. The size of the phase shift depends on the wavelength; if the wavelength is a thousand metres, then an entire cycle fits between us and the transmitter, so we'll get a 360 degree phase shift - but since every cycle is identical, we won't be able to tell and it will look the same as a 0 degree phase shift. However, if the wavelength is two thousand metres, we'll be exactly half a cycle behind the transmitter, and we'll see a 180 degree phase shift.
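Those two examples can be sketched in a few lines of Python (the function name is mine):

```python
def phase_shift_degrees(distance_m, wavelength_m):
    """Phase shift of the received wave relative to what the
    transmitter is emitting at the same instant, wrapped to 0-360."""
    cycles_in_between = distance_m / wavelength_m
    return (cycles_in_between * 360) % 360

# A 1000 m wavelength over 1000 m: a whole cycle fits in between,
# which looks identical to no shift at all.
print(phase_shift_degrees(1000, 1000))  # → 0.0

# A 2000 m wavelength over 1000 m: exactly half a cycle behind.
print(phase_shift_degrees(1000, 2000))  # → 180.0
```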

(In order to try and inoculate my children against getting confused about phase when they learn about it at school, I have always referred to the situation when they're buttoning their clothes up and find that they've been putting button N into buttonhole N+1 or N-1 as a "phase error".)

The fact that the wave repeats exactly after a cycle is important: it means that phase shifts will always be somewhere between 0 and 360 degrees (for periodic waves, at least); but by measuring it a bit differently, you could also measure phases between -180 and +180 degrees, with negative numbers indicating that the wave is lagging behind the reference, rather than counting that as being nearly 360 degrees towards the next cycle.
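That second measuring convention is a one-liner in code (a sketch; the helper name is mine):

```python
def wrap_phase(degrees):
    """Wrap a phase difference into the -180..+180 degree range, so
    phases lagging behind the reference come out negative."""
    return (degrees + 180) % 360 - 180

# 270 degrees "towards the next cycle" reads as lagging 90 behind:
print(wrap_phase(270))  # → -90
print(wrap_phase(90))   # → 90
```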

Another important thing is that, for two waves of the same frequency, the phase difference between them stays the same as the two waves travel along together in the same direction. That makes sense, as they travel at the same speed. But what about waves of different frequency? At some points, the two waves will briefly overlap perfectly, perhaps both peaking at the same time, or perhaps at some other arbitrary point in the cycle - at that point, they have a zero phase difference. But even a microsecond later, the higher-frequency wave will be slightly further ahead in its cycle than the lower-frequency wave: the phase difference steadily increases as the higher-frequency wave sneaks ahead in the number of cycles it's covered, until finally it gets a whole number of cycles ahead of the lower-frequency wave - and the phase difference is back at zero. The phase difference between two waves of different frequency therefore constantly changes, linearly (changing by the same amount every metre travelled), but wrapping around to always be between 0 and 360, or -180 and 180, depending on how you measure it. And the rate of change of phase depends on the frequency difference.
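That steady drift is easy to express numerically (a Python sketch; the function name is mine - the two waves are assumed to have coincided at time zero):

```python
def phase_difference_degrees(f_low_hz, f_high_hz, t_seconds):
    """Phase difference between two waves that overlapped perfectly at
    t = 0, in degrees, wrapped into the 0..360 range."""
    return ((f_high_hz - f_low_hz) * t_seconds * 360) % 360

# 1000 Hz vs 1001 Hz: the difference grows by a full cycle per second.
print(phase_difference_degrees(1000, 1001, 0.25))  # → 90.0
print(phase_difference_degrees(1000, 1001, 1.0))   # → 0.0 (a whole cycle ahead)
```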

But let's put thinking about phase on the back burner for a moment and talk about the most basic way of sharing a communications medium: Continuous Wave (CW) modulation.

Continuous Wave (CW)

In this scheme, a bunch of different transmitters can share a medium by using different frequencies - and choosing to either transmit on that frequency, or not. Receivers can look for waves with the frequency of the transmitter they're interested in, and either see a wave, or not.

We can use that to communicate a very simple fact, such as "I am hungry": transmit when hungry, switch off when fed. This is used to good effect by babies in the "noises in air" medium (yes, parents can pick out their own baby in a room of crying babies, by the frequency). It can also be used to communicate arbitrarily complex stuff, by using it to transmit serial data using RS-232 framing; or by using short and long pulses to transmit a code such as Morse.

So how close together can we pack different CW channels? Can we have one transmitter on 1,000 Hz and another on 1,001 Hz? Well, not practically, no. A receiver needs to listen for a mixture of signals coming in, and work out if the frequency it's looking for is in there, to tell if the transmitter is on or off right now. As it happens, the techniques for doing this all boil down to ways of asking "In a given time period, how much total amplitude of wave was received between these two frequencies?"; and the boundaries of the two frequencies are always slightly fuzzy too - a signal just below the bottom frequency will still register a bit, albeit weakly.
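That "total amplitude between two frequencies in a time period" question can be sketched with a toy detector: correlate the received samples against a sine and a cosine at the frequency of interest over a window, and measure the resulting magnitude (this is in effect one bin of a Fourier transform; all the names below are illustrative):

```python
import math

def tone_power(samples, sample_rate, freq):
    """Rough strength of a tone at `freq` Hz over the sample window."""
    n = len(samples)
    s = sum(x * math.sin(2 * math.pi * freq * i / sample_rate)
            for i, x in enumerate(samples))
    c = sum(x * math.cos(2 * math.pi * freq * i / sample_rate)
            for i, x in enumerate(samples))
    return math.hypot(s, c) * 2 / n

rate = 8000  # samples per second
key_down = [math.sin(2 * math.pi * 1000 * i / rate) for i in range(800)]
key_up = [0.0] * 800

print(round(tone_power(key_down, rate, 1000), 3))  # → 1.0 (transmitter on)
print(round(tone_power(key_up, rate, 1000), 3))    # → 0.0 (transmitter off)
```

Shrink the window or nudge the tone off-frequency and the answer gets fuzzier, which is exactly the tradeoff discussed below.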

If you are doing very fast CW, turning on and off rapidly to send lots of pulses per second because you have a lot to say, you'll need to use very small time periods in your receiver, so you get the start and stop of each pulse accurately enough to tell if it's a short or long pulse, and to avoid multiple pulses going into a single time period.

If you have lots of channels close together, you'll need a very narrow range of frequencies you look between. The width of that range of frequencies is known as the "bandwidth"; us computery people think of bandwidth in bits per second, the capacity of a communications link, but the reason we call that "bandwidth" is because it's fundamentally constrained by the actual width of a frequency band used to encode that binary data stream!

If you do both, then the amount of total amplitude you'll spot in your narrow frequency band and your short time period will be very low when the transmitter is transmitting - and it will get harder and harder to distinguish it from the background noise you receive even when nothing is transmitting.

So: Yes, you can have very close-spaced channels - if the noise level is low enough and your CW pulses are slow enough that you can have a long enough time period in your receiver, to get reliable detection of your pulses. But it's always a tradeoff between pulse speed, how wide your frequency band is, background noise levels, and how often your receiver will be confused by noise and get it wrong.

You might think "Wait a minute, that's silly. If the transmitter emits a sinusoidal wave and that turns up at the receiver, you can simply measure the wavelength and frequency; and if you start a clock ticking at the same frequency you can even detect any sudden changes in phase in the wave. How is that in any way fuzzy or unclear?"; but that doesn't scale to when your receiver is picking up the sum of a load of different waves. If there are two waves of very different frequency then it's easy to tell them apart, but if they're of very similar frequency it's another matter entirely.

Amplitude Modulation (AM)

Sometimes we want to send something more complicated than just an on/off signal. Often, we want to send voices, or pictures - both of which can be encoded into a single-dimensional signal: a quantity that varies with time, such as the voltage encountered on a microphone (pictures get a little more involved, but let's not worry about that right now). Rather than just turning our transmitter on and off, we could vary the amplitude of the signal it sends along a spectrum, and thus communicate a varying signal.

Of course, this only works if the signal we're sending (known as the "baseband signal") has a maximum frequency well beneath that of the frequency we're transmitting at (the "carrier frequency"); the same limits as with turning a CW transmitter on and off quickly apply - your carrier wave needs to complete at least a few cycles for its amplitude to be reliably measured, before it changes.

Because you're changing the amplitude of the carrier to convey the baseband signal, this is known as "amplitude modulation", or AM. You can think of it as multiplying the baseband signal with the carrier signal and transmitting the result.

Of course, this operation is symmetrical - the result of sending a 10kHz sine wave baseband on a 1MHz carrier is the same as sending a 1MHz baseband signal on a 10kHz carrier - but we agreed to only do this when the maximum baseband frequency is well below the carrier frequency, so we always know which way round it goes!

By convention, let's treat our baseband signals as being between -1 and +1; our carrier signal is generated at the power level we want to transmit at, so if the baseband signal is 1 we're just transmitting at full power, and if the baseband signal is 0, we're not transmitting anything.

Indeed, continuous wave is just a special case of AM, where the baseband signal is either a train of rectangular pulses, switching at will between 0 and 1.

Now, we mentioned that we need to place CW frequencies a little way apart, because otherwise a receiver couldn't distinguish them reliably - and the distance apart depended, amongst other things, on how quickly the CW signal turned on and off. This of course applies to general AM signals, too: the rate at which they turn on and off, in the general case, being replaced by the maximum frequency in the baseband signal. The higher the frequencies, the further apart your carrier frequencies need to be before the multiple signals interfere with each other.

But... what does that really look like?

Imagine you have a receiver configured with a very narrow input bandwidth; one intended for receiving slow Morse CW might have a bandwidth of 500Hz or so. What would you pick up if you tuned it to the frequency of an AM transmitter? What if you went a bit above or a bit below?

Clearly, if the transmitter was just transmitting a constant level, that's what you'd pick up - which is easy to think about if it's transmitting zero (you receive nothing) or some positive quantity. Of course, if the receiver doesn't know what the maximum amplitude of the transmitter is, it will have no way of knowing if a signal it receives at any given level is 100% of the transmitter power, or merely 10% of it - so it's kind of hard to say what the level means, unless it's zero. More annoyingly, if the transmitter transmits -1, then what we'll get is the full carrier power but inverted. As that inversion swaps peaks for troughs and leaves the middles the same, this is the same as a 180 degree phase shift; the only way to tell it apart from transmitting +1 is to have observed the signal when it's transmitting something positive, and started a clock ticking at the carrier frequency, so we can notice that we're now receiving peaks when we would normally have been receiving troughs.

It's certainly possible to make this kind of thing work: you have to periodically transmit a reference signal, say +1 for a specified time period, so that receivers can wait for that "synchronisation pulse" and therefore learn the phase and maximum amplitude of the signal, and then compare that against the signal received going forward.

But a more common convention is to avoid negative baseband signals entirely. Squash the baseband input range of -1 to +1 up into 0 to 1, by adding 1 and then dividing by 2. This means that a baseband input of 0 maps to a signal transmitting the carrier at half power; a baseband input of +1 maps to full carrier power; and a baseband input of -1 maps to zero carrier power. That avoids the problems of identifying negative baseband signals, but seemingly still leaves the problem of working out what the actual transmit power is... However, if we're not transmitting constant baseband amplitudes, but are instead transmitting an interesting baseband signal, that wiggles up and down around zero with approximate symmetry, then the average signal power we receive will be half of the peak carrier power. Tada!
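The squash itself is trivial, but worth pinning down (a sketch; the function name is mine):

```python
def squash(baseband):
    """Map a baseband sample from the -1..+1 range into 0..1,
    by adding 1 and then dividing by 2."""
    return (baseband + 1) / 2

print(squash(-1.0))  # → 0.0  (zero carrier power)
print(squash(0.0))   # → 0.5  (half carrier power)
print(squash(1.0))   # → 1.0  (full carrier power)
```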

But, our narrow-bandwidth CW receiver can't pick that up, because it will be changing too rapidly for it. So what WILL it pick up? I'm afraid we're going to need to break out some maths...

As we mentioned earlier, any wave can be made by adding up a bunch of sinusoidal waves, with varying amplitudes, frequencies, and phases (relative to what, though, as phase is always relative? Well, don't worry too much about that for now, we'll get into it when I talk about Fourier transforms in a future post). If we can work out what our receiver will pick up when we transmit a single sinusoidal wave as our baseband signal, we can easily work out what it will receive when we transmit a complex signal - because if our baseband signal is the sum of a load of sinusoidal waves A+B+C+D, and we multiply that by a carrier signal X and transmit X*(A+B+C+D), that's the same as X*A + X*B + X*C + X*D: in other words, if we amplitude-modulate the sum of a number of baseband signals, the transmitted signal is just the sum of the transmitted signals we'd get if we'd modulated each of the baseband signals separately.

So, let's just think about how a single sine wave gets modulated. Let's do that by introducing the sine function, sin(x), whose value is the instantaneous amplitude of a sinusoidal wave (with amplitude 2 from peak to trough, or 1 from middle to peak) as x moves from 0 to T. T is 360 if you're working in degrees; people doing this properly prefer to use a quantity that's two times pi (because they're working in radians), but we'll just call the unit of a full circle T and let you use whatever units you like.

So if we want a sinusoidal wave of amplitude A (from middle to peak), frequency F, and phase (relative to some arbitrary starting point) P, then its signal at time t will be:

A * sin(t*F*T + P)

Now, imagine that's our baseband signal (or, to be precise, one sinusoidal component of it). Imagine we have a carrier signal, with amplitude Ac, frequency Fc, and carrier phase Pc, which at time t will be:

Ac * sin(t*Fc*T + Pc)

If we push the baseband signal up so that it can never go negative - here by halving it and adding 1, so it swings between 0.5 and 1.5 rather than -1 and +1 (the same idea as the 0-to-1 squash discussed above, with the scaling chosen to keep the carrier term at full strength) - and then multiply it by the carrier, our modulated output signal will be:

Ac * sin(t*Fc*T + Pc) * (1 + A * sin(t*F*T + P) / 2)

If you distribute the central multiplication over the brackets on the right, you get:

Ac * sin(t*Fc*T + Pc) + A * Ac * sin(t*Fc*T + Pc) * sin(t*F*T + P) / 2

That has two parts, joined by a +.

The left hand part is just the carrier signal.

The right hand part is more interesting. It's got A*Ac/2 in it: the product of the carrier and baseband amplitudes, divided by two - and it's got this intriguing sin(X)*sin(Y) factor, where X = t*Fc*T + Pc and Y = t*F*T + P. I'll spare you the maths, and tell you now that sin(X)*sin(Y) = sin(X-Y + T/4) / 2 - sin(X+Y + T/4) / 2.

Now, X-Y+T/4 is (t*Fc*T+Pc) - (t*F*T+P) + T/4, which simplifies to t*T*(Fc-F) + Pc - P + T/4, and X+Y+T/4 simplifies to t*T*(Fc+F) + Pc + P + T/4.

Also, we noticed earlier that inverting a sine wave is the same as a 180 degree (T/2) phase shift, so we can swap that subtraction for an addition by adding an extra T/2 phase to the second sin. As the first one already has a + T/4 phase shift, let's keep it symmetrical (remember that phase wraps around at T) and turn the second one's + 3T/4 into a - T/4.

Putting it all back together, our modulated signal is:

Ac * sin(t*Fc*T + Pc) + A * Ac * sin(t*T*(Fc-F) + Pc - P + T/4) / 4 + A * Ac * sin(t*T*(Fc+F) + Pc + P - T/4) / 4

So we have three sinusoidal signals added together.

  1. The carrier, unchanged: amplitude Ac, frequency Fc, phase Pc.
  2. A signal with amplitude A * Ac / 4, frequency Fc-F, and phase Pc - P + T/4.
  3. A signal with amplitude A * Ac / 4, frequency Fc+F, and phase Pc + P - T/4.
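We can check that derivation numerically (a Python sketch using the formulas above, with angles in degrees so T = 360; the parameter values are arbitrary):

```python
import math

T = 360.0  # one full cycle, in degrees

def sin_deg(x):
    return math.sin(math.radians(x))

Ac, Fc, Pc = 1.0, 1000.0, 30.0  # carrier amplitude, frequency, phase
A, F, P = 0.8, 100.0, 70.0      # baseband sinusoid

# At every instant, the modulated carrier equals the sum of the
# carrier plus the two sidebands derived above.
for t in [0.0, 0.0001, 0.00037, 0.0009]:
    modulated = Ac * sin_deg(t*Fc*T + Pc) * (1 + A * sin_deg(t*F*T + P) / 2)
    three_waves = (Ac * sin_deg(t*Fc*T + Pc)
                   + A * Ac * sin_deg(t*T*(Fc - F) + Pc - P + T/4) / 4
                   + A * Ac * sin_deg(t*T*(Fc + F) + Pc + P - T/4) / 4)
    assert abs(modulated - three_waves) < 1e-9
```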

Unless F, the baseband frequency, is very low, our narrow-bandwidth receiver tuned to the carrier frequency Fc will only pick up the first part: the unchanged carrier - it will be seemingly blind to the actual modulation! All the baseband signal ends up on the two other signal components, whose frequencies are above and below the carrier frequency by the baseband frequency. If we tune our receiver up and down around the carrier frequency, we'll pick up these two copies of the baseband signal, phase shifted and with quartered amplitude.

These two copies of the baseband signals are known as "sidebands". The first one, with frequency equal to the carrier frequency minus the baseband frequency, is the "lower sideband"; the other, with frequency equal to the carrier frequency plus the baseband frequency, is the "upper sideband".

You'll note that the phases of the two sideband signals, relative to the carrier phase (so subtract Pc from both) are -P + T/4 and P - T/4. Note that these are the same apart from a factor of -1.

If our baseband signal was a complex mixture of sinusoids, then the modulated signal will be the carrier, plus a "copy" of the baseband signal shifted up in frequency by the carrier frequency, and shifted forward in phase by the carrier phase minus a quarter-cycle; plus a second copy of the baseband signal, shifted up like the first, but then inverted in frequency difference from the carrier, and in phase.

And this tells us how closely we can pack these AM signals - we need a little more than the maximum baseband frequency above and below the carrier frequency, to make space for the two sidebands.

"But wait wait wait, that doesn't make sense," I hear you cry. "Where do these sidebands come from? If I have my transmitter and it has a power knob on it and I turn that power knob up and down, so it emits a sine wave of varying amplitude, there's nothing more complicated going on than a sine wave of varying amplitude. How can you tell me that's actually THREE sine waves?!"

Well, that's a matter of perspective. But if you do the maths and add up three sine waves of equally-spaced frequency with the right phase relationship, you'll get what looks like a single sine wave that varies in amplitude. So when a receiver receives it, it's powerless to "tell the difference". A varying-amplitude sine wave and the sum of those three constant-amplitude sine waves are exactly the same thing.

And that's how amplitude modulation works. When you listen to an AM radio, you're listening to an audio frequency signal (converted from vibrations in the air into an electronic signal by a microphone) that's been modulated onto a radio frequency carrier signal and transmitted as radio waves through space. You can do this without very complicated electronics at all!

Single Sideband (SSB)

If you're trying to pack a lot of channels in close together, however, having to transmit both sidebands, both carrying a copy of the baseband signal, is a bit wasteful. Also, it wastes energy - transmitting a signal takes energy, and our modulated signal consists of the unmodulated carrier plus the two sidebands, each at most a quarter of the amplitude of the raw carrier (remember that A is at most 1) - at least two thirds of the energy is in that unchanging carrier!

If we can generate a signal that looks like an AM signal, but remove the constant carrier and one of the sidebands, we can use a quarter of the energy to get the same modulated signal amplitudes in the surviving sideband (or, for the same energy, get four times the signal amplitude). Therefore, single-sideband is popular for situations where we want efficient use of power and bandwidth, such as voice communications between a large number of power-limited portable stations. But for broadcasting high-quality sounds such as music, we tend to want to use full AM - power isn't such an issue for a big, fixed, transmitting station and we can afford to use twice the bandwidth to get a better signal; as an AM signal has two copies of the baseband signal in the two sidebands, the receiver can combine them to effectively cancel out some of the background noise.

Of course, you need to make sure that the transmitter and the receiver both agree on whether they're using the upper sideband (USB) or the lower one (LSB) - otherwise, they won't hear each other, as one will be transmitting signals on the opposite side of the carrier frequency to the side the other's listening on! And if the receiver adjusts the carrier frequency they're listening on to try and find the signal, they'll hear it with the frequencies inverted, which won't produce recognisable speech... A single sideband receiver can listen to an AM signal by just picking up the expected sideband, but an AM receiver will not pick up a SSB signal correctly, due to the lack of the constant carrier to use as a reference.

But amplitude modulation (of which SSB is a variant) is, fundamentally, limited by the fact that background noise will always be indistinguishable from the signal in the sidebands; all you can do is to transmit with more power so the noise amplitude is smaller in comparison. However, there is a fundamentally different way of modulating signals that offers a certain level of noise immunity...

Frequency Modulation (FM)

What if we transmit a constant amplitude signal, but vary its phase according to the baseband signal?

If we go back to our carrier:

Ac * sin(t*Fc*T + Pc)

And baseband signal:

A * sin(t*F*T + P)

Rather than having the carrier at some constant phase Pc, let's set Pc to the baseband signal, scaled so that the maximum baseband range of -1..+1 becomes a variation in phase of, say, at most T/4 each side of zero:

Pc = A * sin(t*F*T + P) * T/4

Thus making our modulated signal:

Ac * sin(t*Fc*T + A * sin(t*F*T + P) * T/4)
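A quick Python sketch of that formula (angles in degrees, parameter values and names mine) shows the key property: the amplitude never varies, only the phase does.

```python
import math

T = 360.0  # one full cycle, in degrees

def sin_deg(x):
    return math.sin(math.radians(x))

def modulated_signal(t, Ac=1.0, Fc=1000.0, A=1.0, F=100.0, P=0.0):
    """Carrier with its phase swung up to T/4 either side of zero by a
    single baseband sinusoid, per the formula above."""
    phase = A * sin_deg(t*F*T + P) * T/4
    return Ac * sin_deg(t*Fc*T + phase)

# Unlike AM, the envelope is constant: no sample ever exceeds the
# carrier amplitude.
samples = [modulated_signal(i / 100000) for i in range(1000)]
print(max(abs(s) for s in samples) <= 1.0)  # → True
```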

This is called "phase modulation". But you never hear of "phase modulation", only "frequency modulation". Why's that?

Well, a receiver has a problem with detecting the phase of the signal. Phase is always relative to some other signal; in this case, the transmitter is generating a signal whose phase varies compared to the pure carrier. The receiver, however, is not receiving a pure carrier to compare against. The best it can do is to compare the phase of the signal to what it was a moment ago - in effect, measure the rate of change of the phase. To make that work, the transmitter must change the phase at a rate that depends on the baseband signal, rather than directly with the baseband signal. But how to do that?

You may recall, from when we first talked about phase, that the phase difference between two signals of slightly different frequency changes with time - it goes from 0 to T (or -T/2 to T/2, depending on how you measure it) at a constant rate, and that rate depends on the frequency difference. That means that a frequency difference between two waves is the same thing as the rate of change of phase between the two waves...

This makes sense if you look at our basic wave formula A * sin(t*F*T + P) - if P is changing at a constant rate, then P is some constant X times time t, so we get A * sin(t*F*T + t*X), or A * sin(t*(F*T + X)), or A * sin(t*T*(F + X/T)) - we've just added X/T to the frequency (and T is a constant).

So all the transmitter needs to do is to vary the frequency of the signal it transmits, above and below the carrier frequency, in accordance with the baseband signal; and the receiver can measure the rate of change of phase in the signal it receives to get the baseband signal back.
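Here's a sketch of that round trip in Python (my own illustrative numbers; I use a complex exponential purely as a convenient way to carry the phase so it's easy to read back out):

```python
import numpy as np

TAU = 2 * np.pi                    # one full cycle, in radians

fs = 100_000                       # sample rate, Hz (illustrative)
t = np.arange(0, 0.05, 1 / fs)
Fc = 5_000.0                       # carrier frequency, Hz
dev = 1_000.0                      # deviation: baseband +1 -> +1 kHz

baseband = np.sin(t * 50.0 * TAU)  # a 50 Hz tone in -1..+1

# Transmitter: the *frequency* tracks the baseband, so the phase is the
# running integral of the instantaneous frequency:
inst_freq = Fc + dev * baseband
phase = TAU * np.cumsum(inst_freq) / fs
signal = np.exp(1j * phase)        # complex form, so phase is easy to read

# Receiver: the rate of change of phase gives the frequency back...
recovered_freq = np.diff(np.unwrap(np.angle(signal))) * fs / TAU
# ...and removing the carrier and deviation scaling gives the baseband:
demodulated = (recovered_freq - Fc) / dev
```

The receiver never looks at the amplitude of `signal` at all, which is exactly why FM shrugs off additive noise better than AM.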

Therefore, we call it frequency modulation (FM). The neat thing is that, because we don't care about the amplitude of the received signal - just its phase - we aren't affected by noise as much: noise is additive, so it mainly changes the amplitude of the received signal.

So what would we see if we tuned across an FM signal with our narrow-bandwidth receiver? How much bandwidth does an FM channel need, for a given maximum baseband frequency?

Surely, we get to choose that - if we decide that a baseband signal of +1 means we add 1kHz and -1 means we subtract 1kHz, then the channel width we need will be 1kHz either side of the carrier frequency, 2kHz total, regardless of the baseband frequency involved? Unless the baseband frequency becomes a sizeable fraction of the carrier frequency, of course; we can't really measure the frequency of the modulated signal if it's varying drastically in phase at timescales approaching the cycle time!

But just as multiplying our carrier by the baseband signal for AM caused Strange Maths to happen and create sidebands out of nowhere, something similar happens with FM. Now, I could explain the AM case by hand-waving over the trigonometric identities and show how sidebands happened, but the equivalent in FM is beyond my meagre mathematical powers. I'll have to delegate that to Wikipedia.

General Modulation: Mind your Is and Qs

Before we proceed, I must entertain you with an interesting mathematical fact.

Imagine our carrier signal at time t again:

A * sin(t*F*T + P)

Imagine we're using that as a carrier, so F is constant, and we're thinking about modulating its amplitude A or its phase P to communicate something. We're varying two numbers, so it should be no surprise that we can rearrange that into a different form that still has two varying numbers in it. It just so happens that we can write it as the sum of two signals:

I * sin(t*F*T) + Q * sin(t*F*T + T/4)

This is kinda nice, because that's just varying amplitudes of two constant-amplitude constant-phase sinusoidal signals at the same frequency, with a quarter-wave phase difference between them. This form means we don't have P inside the brackets of the sin() any more - and it was varying things inside the brackets of sin() that made the maths of AM and FM so complicated to work out.

But what's the connection between our original variables A and P, and our new ones I and Q? Well, it's quite simple:

I = A * sin(P + T/4)

Q = A * sin(P)

This can be used to build a kind of "universal modulator": given a carrier signal and two inputs, I and Q, output the sum of the carrier times I and the carrier phase-shifted by T/4, times Q.
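A minimal sketch of such a universal modulator in Python (the function name and test values are my own; again, numpy works in radians, so T/4 becomes 2π/4):

```python
import numpy as np

TAU = 2 * np.pi   # one full cycle, in radians

def iq_modulate(i, q, t, F):
    """Sum of the carrier times I, plus the carrier shifted by a
    quarter cycle (T/4) times Q."""
    return i * np.sin(t * F * TAU) + q * np.sin(t * F * TAU + TAU / 4)

# Sanity check: constant I and Q give a plain sine wave at the carrier
# frequency, with amplitude sqrt(I^2 + Q^2) -- here, 1.0:
fs = 100_000
t = np.arange(0, 0.01, 1 / fs)
wave = iq_modulate(0.6, 0.8, t, 1_000.0)
```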

You can then build an AM, FM, USB or LSB transmitter by working out I and Q appropriately. If your input baseband signal is X:

For AM (varying A, holding P = 0): I = X, Q = 0.

For FM (holding A = 1, varying P so that X is the rate of change of P): I = sin(X * t + T/4), Q = sin(X * t).

For LSB, I = X, Q = X phase-shifted by T/4

For USB, I = X, Q = -(X phase-shifted by T/4)

The latter two deserve some explanation! Let's imagine that X, our baseband signal, is a single sine wave, with zero phase offset to keep it simple:

Ax * sin(t*Fx*T)

For LSB, that gives us:

I = Ax * sin(t*Fx*T)


Q = Ax * sin(t*Fx*T + T/4)

Feeding that into the modulation formula to get our modulated signal:

I * sin(t*F*T) + Q * sin(t*F*T + T/4)

gives us:

Ax * sin(t*F*T) * sin(t*Fx*T) + Ax * sin(t*F*T + T/4) * sin(t*Fx*T + T/4)

We've got sin(X)*sin(Y) again, so can use sin(X)*sin(Y) = sin(X-Y + T/4) / 2 - sin(X+Y + T/4) / 2 to expand both products; the sum-frequency terms then cancel out, and the difference-frequency terms add up, giving:

Ax * sin(t*F*T - t*Fx*T + T/4)


Ax * sin(t*T*(F - Fx) + T/4)

That's a single sine wave, at frequency F - Fx - just the lower sideband!

Handily, we can get I and Q back from a modulated signal by just multiplying it by the same two carrier signals again. If we take our modulated signal:

I * sin(t*F*T) + Q * sin(t*F*T + T/4)

...and multiply it by sin(t*F*T) we magically get I back; and multiplying it by sin(t*F*T + T/4) magically gets us Q back (if anyone can explain how that works with maths, please do, because I can't figure it out; but I've experimentally verified it...).
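One way to see it (a sketch of my own reasoning, with illustrative numbers, not from the post): sin(x)*sin(x) = 1/2 - cos(2x)/2 and sin(x)*cos(x) = sin(2x)/2, so multiplying the modulated signal by the reference carrier leaves I/2 plus wobbles at twice the carrier frequency; averaging over whole cycles (a crude low-pass filter) kills the wobbles, and a factor of 2 undoes the halving:

```python
import numpy as np

TAU = 2 * np.pi
fs = 100_000
F = 1_000.0
t = np.arange(0, 0.01, 1 / fs)   # exactly ten carrier cycles

I_tx, Q_tx = 0.3, -0.7           # arbitrary test values
modulated = I_tx * np.sin(t * F * TAU) + Q_tx * np.sin(t * F * TAU + TAU / 4)

# Multiply by each reference carrier, low-pass by averaging over whole
# cycles, and double to undo the factor of a half:
I_rx = 2 * np.mean(modulated * np.sin(t * F * TAU))
Q_rx = 2 * np.mean(modulated * np.sin(t * F * TAU + TAU / 4))
```

The averaging step is the important hidden ingredient: without some kind of low-pass filtering after the multiply, the double-frequency terms remain.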

We can visualise a modulated carrier in terms of I and Q on a two-dimensional chart. Conventionally, I is the X axis and Q is the Y axis. If there is no signal, I and Q are both zero - we get a dot in the middle of the chart. The amplitude of the received signal is the distance from the center to the dot, and the phase relative to the expected carrier is the angle, anti-clockwise from the positive X axis (extending to the right from the origin). If there is an amplitude-modulated signal, then that dot moves away from the centre by the baseband signal, at some angle depending on the phase difference between the received signal and the reference carrier in the receiver.

What will that angle be? Well, if we lock our reference carrier in the receiver to the phase of the signal when we first pick it up, then as AM doesn't change the phase, it will remain zero - the dot will just move along the positive I axis. If we don't have any initial phase locking, and just go with whatever arbitrary phase difference exists between our reference oscillator and the received signal (which will depend on when the reference oscillator was started compared with when the transmitter's oscillator was started, and the phase shift caused by the propagation time of the signal, which depends on your distance - so, pretty arbitrary overall), we will find that our I/Q diagram is just rotated by some arbitrary angle. But that's fine; as with FM, it's the change of phase that matters, not the actual phase.

If we receive an FM signal, then the amplitude will remain constant but the phase will change, meaning that the dot will wiggle back and forth along a curved line - an arc of a circle about the origin.

Noise in the signal will cause the distance from the origin to vary, but it won't cause much variation in the angle unless it gets overpoweringly strong.

The fun thing about I/Q modulation is that it means we can take any two baseband signals and modulate them onto a single carrier, as long as their frequencies are well below the carrier frequency. We can modulate amplitude and frequency at the same time. We could have stereo audio by using I and Q as the left and right channels, respectively.

But, in practice, we tend to use such general I/Q modulation for digital data, rather than putting two analogue baseband signals together!

Digital data modes (QAM)

Say, rather than sending an analogue signal such as voice, you want to send a stream of symbols - such as characters of text.

You can assign each symbol in the alphabet you want to send to a particular I/Q value pair, and feed that into your transmitter.

At the receiver, we'll pick them up with some arbitrary rotation of the I/Q pairs, because the reference oscillator is at an arbitrary phase difference from the transmitter's, and with some arbitrary scaling, because we don't know how much the signal has been degraded by distance.

But if we can somehow work out that phase difference and rotate the I/Q diagram to get it the right way around again, and can work out the peak signal strength expected from the transmitter and scale the I/Q values up to their proper range, we can decode the signal by comparing the received I/Q against our list of I/Q values assigned to each symbol in the alphabet, and picking the nearest - noise will shift the signal around a bit, but the nearest one is our best bet.
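The nearest-point decoding step is easy to sketch in Python (the four-symbol alphabet and the noisy sample are my own illustrative choices):

```python
import numpy as np

# A hypothetical four-symbol alphabet, one (I, Q) pair per symbol:
alphabet = np.array([(1, 1), (-1, 1), (-1, -1), (1, -1)], dtype=float)

def nearest_symbol(received_iq):
    """Decode by picking the alphabet point closest to the received one."""
    distances = np.linalg.norm(alphabet - received_iq, axis=1)
    return int(np.argmin(distances))

# A noisy reception of symbol 2, whose true point is (-1, -1):
decoded = nearest_symbol(np.array([-0.8, -1.3]))
```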

As usual, if we send symbols faster (so there's less signal at each I/Q value for each symbol) then the influence of noise rises, so we must trade off the rate at which we send symbols (the "baud rate"), how far apart the symbol I/Q points are, and how many symbols we get wrong per second.

How do we get that original phase-lock, though, to rotate the pattern to the right angle? Well, we might make our choice of points (which is known as a "constellation") not rotationally symmetrical, and make sure that we transmit enough different symbols to make the rotation of the constellation obvious at least once every time unit (which might be "a second" or it might be "at the start of every message" or whatever). For instance, make the constellation look like a big arrow pointing along the I axis, and start each message with a few symbols including the tip of the arrow and a couple each from the central line and the two lines in the arrow-head. The receiver can watch the pattern until it becomes clear which phase angle it needs, and then decode merrily on that basis. It will need to keep checking the phase angle and updating it - if there's even the slightest difference between the frequency in the transmitter's oscillator and the frequency in the receiver's oscillator, the I/Q signals at the receiver will slowly rotate as time passes.

Or, you can avoid the need in the same way that FM does - rather than having the constellation defined in terms of points in the I/Q diagram, define it in terms of distance from the origin and rotation angle for each symbol. Each symbol then goes to an I/Q pair that's at the specified distance, but at an angle that depends on the last symbol transmitted, plus or minus that offset. The receiver doesn't need to synchronise at all; it just needs to grab I/Q values and work out the distance from the origin, and the rotation angle between symbols received. If none of the symbols have a phase difference of zero, we get an extra benefit - every symbol involves a change in the modulated signal, even if it's the same symbol again, so we can automatically detect the rate at which symbols are being sent, and not get confused as to how many are being sent when the same symbol is sent repeatedly! You need to send a single symbol before any transmission that is used purely for the receiver to use as a phase reference for the NEXT symbol which actually carries some data, of course, but that's a small price to pay.
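Here's a sketch of such a differential scheme (the four phase steps are my own illustrative choice; note that none of them is zero, so every symbol moves the signal):

```python
import numpy as np

TAU = 2 * np.pi

# Four symbols, each encoded as a phase *change* of 1/8, 3/8, 5/8 or
# 7/8 of a cycle relative to the previous symbol:
deltas = TAU * np.array([1, 3, 5, 7]) / 8

def encode(symbols, ref_phase=0.0):
    """Absolute phase of each transmitted symbol, starting from the
    phase of an initial reference symbol."""
    return ref_phase + np.cumsum(deltas[symbols])

def decode(phases, ref_phase=0.0):
    """Recover symbols from the phase *differences* between successive
    received symbols - no absolute phase lock needed."""
    diffs = np.diff(np.concatenate([[ref_phase], phases])) % TAU
    # pick the nearest defined phase step for each difference:
    return np.argmin(np.abs(diffs[:, None] - deltas[None, :]), axis=1)

message = np.array([0, 3, 3, 1, 2])
round_trip = decode(encode(message))
```

Because the decoder only ever looks at differences, an arbitrary rotation of the whole I/Q diagram drops out entirely.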

To find the distance from the I/Q origin of peak transmitter power, we have to either make sure we transmit a symbol using maximum power at suitable intervals so the receiver can update their expectations and scale the I/Q diagram to the right size, or put our entire constellation in a circle so the amplitude is always the same; or, perhaps, have our constellation consist of a series of circles that use different phase differences in each circle - if you have three symbols in the smallest circle, and four in the next circle, and five in the next one up, and those three circles are rotated so that no two points on different circles are on the same phase angle, then for any two received symbols, we will be able to tell what circles they are on just from the phase difference between them - and thus know how much to scale them to place them on those circles. But if we're transmitting a reference symbol at the start of every transmission to establish the phase difference to the first actual message symbol, as suggested in the previous paragraph, we can just send that at a known power level and use that as a reference for both phase AND distance from the origin.

The modulated signal is a mixture of AM and FM, as the transmitted symbol's point in the constellation varies both in distance from the origin (amplitude) and in angle rotated since the last symbol (frequency shift).

Because the I/Q representation of a signal with respect to a carrier is known as "quadrature" (the I and Q stand for "in-phase" and "quadrature", respectively), this combined AM-and-FM is known as "Quadrature Amplitude Modulation", or "QAM". Standard constellations have names such as "256-QAM" (which has 256 different symbols, handy for transmitting bytes of data!).


So there you have it. That's not a complete summary of all the tips and tricks used to jam information into communications media; but it should explain the basics well enough for you to make the best of the Wikipedia pages for things like OFDM and UWB!

Note: In order to try and reduce the cognitive load, I've simplified the maths above somewhat - using sin with a phase difference rather than cos, for instance. It still works to demonstrate the concepts, and produces the same results as the conventional formulae apart from perhaps a changed sign here or there!

I'd like to clarify the explanations of various kinds of waves with diagrams, but I don't have the time to draw any right now! I may be able to come back to it later.

Radio Waves (by )

I really love learning things, and recently I've finally been removing a long-standing thorn in my side - the fact that I don't really understand radio frequency electronics and the propagation of radio waves.

I've tried to fix this a few times in the past, but the resources I'd read never seemed to quite explain the whole picture - and I couldn't see how to piece the things they explained together into one coherent understanding of the electromagnetic world; they were clearly only shedding light on little corners of a totality that still remained mysterious to me.

Well, there are still gaps in my understanding... but I've made some progress, and in the hope that I can help others struggling with the same confusions as I was, I'd like to share my way of understanding it all.

One thing that bothered me was that explanations of transmission-line behaviour seemed to flip between talking about instantaneous voltages and currents at some point in the line, sampling the analogue signal travelling down the line - or talking about an RMS average voltage or current, and thereby causing me to struggle to make sense of what they were saying. But I think I now get transmission lines to some extent (although I'm still hazy on waveguides, because I've not gotten around to looking into them yet). And I was never quite sure what the impedance of a whole transmission line really meant, regardless of its length. If I had a transmission line and put a resistor over the end of it, and hooked it up to a battery and an ammeter, I knew that the current flowing would depend on the total resistance of the line and the resistor at the end - which would depend on the length of the line, as its resistance would be in ohms per meter. So what the heck was this impedance thing about? How did impedance mismatches cause reflections?

So, here's how I think about transmission lines now. The "DC model" of hooking a battery up to one end of a line and reading the current that flows into it is, of course, perfectly true - we can set the circuit up and test it; the reason it doesn't contradict with this weird parallel world of impedances is that the DC model is a steady state model of the system. When you first connect that battery to the line, current is going to start flowing into it, crawling along at a sizeable fraction of the speed of light; but until that current has reached the end, flowed through the terminating resistor, and flowed all the way back, it can't possibly have communicated any information about the total resistance of the line and its terminating resistor... So how much current initially flows from the battery, and why? Of course, the line can be thought of as two series of tiny inductors (with resistors in series, if we assume the inductors are perfect) with tiny capacitors connecting the two conductors, due to the inherent inductance of the wire and the inherent capacitance of the gap between them; you can imagine that the current from the battery has to charge the capacitors through the inductors for the voltage/current surge to propagate down the line. But what made "impedance" really click for me was going back to the basics of Ohm's Law and seeing it as a ratio of voltage to current. At any point along that transmission line, a certain instantaneous current will be flowing - and there will be a certain instantaneous voltage between the two conductors at that point; and the voltage divided by the current is the impedance.

So, if a 12 volt voltage source is connected suddenly to a 50 ohm impedance cable, an instantaneous current of 0.24 amps (12 ÷ 50) will flow. Now, as Kirchhoff's current law tells us, the currents flowing into a node must sum to zero; so with the imaginary node at any point on the transmission line connecting two halves of it, the input current must equal the output current (although some current may be lost to resistive heating or leakage, that's not relevant in this case). So what happens when there's a change in impedance at some point in the transmission line? The current must remain the same, but the impedance changes - so the voltage must change, to make Ohm's Law still hold. If my 50 ohm cable is connected to a 75 ohm cable, that 0.24 amps flows into it and changes into a voltage of 18 volts (0.24 × 75). Which is why high impedance transmission lines are less lossy; a transmitter putting a watt of power down such a line (with proper impedance matching) will push out a higher voltage with less current than one putting a watt of power into a low impedance line - and resistive losses in the cable are worse for higher currents.
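That arithmetic, spelled out (just restating the figures above as Ohm's Law, V = I × Z):

```python
# A sudden 12 V step launched into a 50 ohm line, which later meets a
# 75 ohm section; the current through the junction stays the same, so
# the voltage must change to keep V/I equal to the impedance.
V1 = 12.0                 # volts applied to the 50 ohm cable
Z1, Z2 = 50.0, 75.0       # characteristic impedances, ohms

I = V1 / Z1               # instantaneous current: 0.24 A
V2 = I * Z2               # voltage across the 75 ohm section: 18 V
```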

How about the reflections when impedance changes? I'm still a little hazy on this, but I think it's something along the lines of this: imagine a point just where the impedance changes in our example of moving from a 50 ohm cable to a 75 ohm one. A current is flowing into that point, but the voltage is higher after that point than before - which is going to create a current travelling back the other way. What I'm hazy on is how this happens at junctions where the impedance falls (is it to do with the fact that the current flows alternately backwards and forwards, so the junction is traversed by current in both directions anyway, and the phases where the current travels from low to high impedance are what create the reflections? If so, isn't that a kind of rectifying action, that will create harmonics and intermodulation? But what is the "direction" of a signal travelling along a line, anyway? If we froze the signal in time, we'd just see a sine wave of voltage and a sine wave of current along the transmission line - if we restart time, how does it "know" what direction to propagate in? Something to do with the relative phase of the voltage and current waves?) So, yeah, I've a little more to learn there.

But this model of impedance does explain a lot. I wondered why the angles of the radials of a ground-plane monopole antenna affected impedance, but now it makes sense - the end of the transmission line basically spreads out to become a dipole, or a monopole and its ground plane; the electrical field of the travelling signal has to cross a larger region of space, so it makes sense that the voltage required to do so might vary depending on the amount of space crossed. All the mysterious constants, like the fact that a dipole trimmed to 0.48 times the wavelength has an impedance of 70 ohms are really down to the electromagnetic stretchiness of space: the impedance is the voltage required to push one amp along a transmission line (a dipole antenna just being an oddly-shaped transmission line, handing the signal over to the even weirder transmission line that is free space itself), and that is a function of the permittivity and permeability of that space.

This model also explains how impedance matching transformers work. A 1:2 transformer will transform X volts and Y amps on the "left" into 2X volts and Y/2 amps on the "right"; as the impedance is V/I, that means it converts R ohms on the left into 4R ohms on the right, simply through changing the voltages and currents. A 1:N transformer makes a 1:N^2 change in impedance.
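As a one-liner (the function name is my own, just restating the V/I reasoning above):

```python
def reflected_impedance(z, turns_ratio):
    """Impedance seen through a 1:N transformer: voltage scales by N and
    current by 1/N, so their ratio (the impedance) scales by N squared."""
    return z * turns_ratio ** 2

# A 1:2 transformer turns R ohms into 4R ohms - here, 50 into 200:
matched = reflected_impedance(50.0, 2)
```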

Antennas with multiple elements are confusing, but I'm not sure anybody really understands them - as far as I can tell, the design process is almost always to mock it up in a finite-element computer simulation or build a prototype and tweak the design until the desired parameters are obtained experimentally; the mutual interactions between the elements (not to mention ground, support structures, and the transmission line feeding the antenna) are just too complicated to analyse.

I really don't get why there's a near field and a far field (or that funny one in between that, I think, is just a mixture of the two). Does the antenna create both far and near fields at once, with the near field stronger but not spreading out far, so the far field is negligible when close to the antenna? Or does the antenna create a near field, which "decays into" the far field as it spreads out? Nothing I've found seems to explain.

I'm not very clear on why a balanced transmission line that's shorted at one end and open at the other end has varying impedance along its length, and can be used for impedance matching, but it doesn't create reflections from the ends.

But, I can understand how to run a cable to a dipole or monopole antenna, manage the impedance transitions, and make it radiate efficiently. That's progress!

September Events 2018 (by )

Weds 5th - 1-2 pm Women Pioneers in Computer Science at the Museum of Gloucester by Alaric part of our Ada Lovelace Day celebrations - Cuddly Science Histories

Weds 5th - 7 pm Space Album Launch Party at the Guild Hall part of the Gloucester History Festival free but ticketed - Sarah and Jean are part of the album which celebrates local historical places and people

Sat 8th - Pride Day in Gloucester Park - stall selling art work and offering free colouring in - in the Community Tent - Gloucester

Wed 12th - 7:30 pm Book Club Talks - Ada, Ada and Ada - part of our Ada Lovelace Day 10 year special series - Cuddly Science Histories - Cheltenham Bottle of Sauce

Thurs 13th - 6-8 pm - Back To the Future Gloucester PechaKucha - part of our Ada Lovelace Day 10 yr celebrations - Cuddly Science Histories - Eastgate Viewing Chamber (the ruins in the ground near Boots) - Gloucester History Festival - Gloucester

Sat 15th - 11:30 am - 3 pm Mighty Girls of the Past - Gloucester Library - kids fun day as part of the Gloucester History Festival including puppets, activities and colouring in. Ada, Aethelflaed and Mary Anning are amongst some of the Mighty Girls of the past coming to join us - includes the ever popular sandpit dig (yes inside the library!) - Free and open to all ages and abilities - Cuddly Science

Later in the month there should be some poetry events but just waiting for confirmation 🙂

Rome, Christianity and the Ending of Worlds (by )

When confronted with a graph on Facebook showing the "dark ages" and the ensuing arguments over whether Christianity was the saviour/cause of it... I wrote this:

The two events were entwined - the fall of the Empire was also plagued by natural and human-fed disaster, which led to desperation, which fuelled the new religion, which in some cases caused its own disasters. But monotheism in general was on the rise - we'd have ended up with an Abrahamic religion whichever way we turned - the world was ending, and they are doomsday/death cults on the most basic of levels. The loss of information and learning oscillated between the Christians and the Vikings, with both also picking up the slack at moving education forward at various points of history, as well as being the book-burning racists at others, ie one good period in England for education was due to Christian missionaries from Africa (before the Norman conquest). History is a many-threaded rug.

This may be why people get grumpy with me on Facebook. Obviously this is a very, very simplified statement, and that in general is the issue with history - everyone wants and was taught simple narratives, which not only do not paint the full picture but are often twisted to agendas - and that's before you look at how biased the original sources were anyway. Remember, history was written by the battle winners, and as writing was often the preserve of priests of one variety or another, they are tinged with that element, plus of course the storytelling needs; and until relatively recently history was seen as something that was given to you by divine inspiration - a little factoid is missing or inconvenient? Pray and get a juicier, more interesting thing to put in your script. Many of the older religions, ie Judaism, have mechanisms to try and prevent these copying errors/mutations, but even then you are looking at scripts that spent many generations as oral traditions before they were ever confirmed in script on a page.

So yeah, the Roman Empire's falling - but you know, it was gradual - it became too big for its communication network - it tried having multiple leaders, which started well until Constantine decided to murder the other Emperor and his son - the kid was his nephew - he let his sister live. And yes, he was the first Christian Emperor, due to a miracle he saw en route to battle (probably a meteor breaking up before it impacted the ground - if there was anything at all - but equally it could have been something else - maybe even the divine). But it was a political move also; a lot of the wealthy in the Empire were playing with the new religion - it was hip and trendy - it had eked its way out past its oppression (just) - it was the religion of the town - pagan comes from the Latin for countryside, as the pantheon of gods got pushed out of the cities and was considered to only be for the unenlightened.

I think it was his mother that was obsessed with the new religion and travelled to the holy lands to find the roads that Jesus trod - she in many respects is the first archaeologist we have on record, and her somewhat mythical landscape is still imprinted on the area, with many pilgrims still following all that she found and was told - though this was hundreds of years after Christ had actually walked those roads.

But this was only one time in the break-up of the Empire - another saw the reclaiming of the Mediterranean sea as an Imperial lake and a rejuvenation of trade and art... only to be struck down by the first virulent plague outbreak - the Emperor survived - his wife didn't, and he was fatigued and in constant pain afterwards - many blamed his wife for being a whore (she was a dancer when he met her and fell in love).

Yet another ending saw a sop of an Emperor, who fled the city of Rome leaving his sister as a prisoner of the Goths - yes, this was the sacking of Rome by Alaric in 410 AD - there were fires so fierce that they fused gold coin and limestone pavements - it was of course a misunderstanding - climate change and war had left the Goths with no home and little food, and they considered themselves to still be part of the Empire, which had still been wringing taxes out of anyone and everyone they could reach. So they fled to the capital as refugees hoping for aid - the Emperor panicked, and a lot of the damage was done by the Romans themselves.

This is often seen as The End, and the Emperor's sister married Alaric's brother (can't remember if it was a half/step or full brother), and it seemed to be a marriage for love and not politics.

But that wasn't The End, as there was the Holy Roman Empire, and time periods shift and change slowly on human time scales. And things were up and down and up and down.

The end of Roman Britain is the beginning of Anglo-Saxon/Viking Britain - but it wasn't a distinct cut-off, it was an overlap - and a gradual one: as the Romans pulled out, the tribes came and saw no resistance, warred, and then settled. The Romans took their time pulling out and didn't even really all go, as many of them were intermarried and actually native-born by this point.

Because I have an interest in this stuff anyway, and because it is relevant to my novel series, I have spent my teens and adult life reading and watching and prodding at ruins (well, mostly taking photographs) - not full time - not even really part time - I am not an historian or archaeologist, though people keep insisting on calling me that at the moment (last night I was referred to as a "proper" medievalist when I went to speak to someone about their talk on medieval humour and art - they were worried that I would pick holes in their talk! O.o ). I have collected a kind of overview - being a geologist, I tend to bring things back to the rocks and the earth systems, and this is my take on it:

The plagues did a lot of damage - the plagues were caused by overcrowded cities with good travel and trade interconnects - a transport network for the disease vectors to move along. But the plagues could only get a hold of a population if and when they became malnourished, as that weakens the immune system - this is the difference between it being an outbreak and it going full-blown PLAGUE. Healthy individuals who are cared for have good chances of surviving even the roughest of illnesses. Weak, hungry, overcrowded, tired and overworked people with little scope for cleaning or washing, or just having contaminated water to begin with - they... will not recover - they will die, and it will spread like wildfire.

Weather calamities muck up food production - hungry people war over resources, which causes even worse resource problems as it cuts down trade - you get a sickness, starvation, war cycle - this of course results in DEATH - the four horsemen are riding out. Religion sometimes precipitated the disaster, and ones that were avoidable or had the chance of some serious damage limitation were exploded into carnage (I think it was the 13th century European plague outbreak that saw families abandoning the sick because it was seen as a judgement from god, thus increasing the death toll drastically - sick people need to be fed during the recovery period or they will die of starvation, if not relapse of the illness). Of course, religion at other junctures was the balm that allowed people to care for their stricken neighbours and to rebuild afterwards.

When proper pandemics hit with no modern medical care (possibly even with it) - you have a crash in population numbers - civilisation relies on an intricate series of feedback loops that all rely on everyone doing their part of the system. If you lose a chunk of your population - you have a problem. Even 10% is going to have a huge impact - that is 1 in 10 of the farmers, the teachers, the army. You can't produce as much food, education and knowledge transfer falters, and you are in a weaker position relative to those around you, who may be having similar food issues.

So actually my conclusion, in looking at history as a whole, is that these turning points - the collapsing civilisations and transitions - appear to be connected to the weather - to climate. Whether it is an increase in drought, damp, stupidly long winters that catch you unprepared, or rising sea levels. Some of these seem to be linked to volcanic events and others to human activity, ie deforestation by the Meso-American cultures occurred around the right time to be a factor in the mini ice age which is thought to have been a big factor in the "dark ages" of Europe - these things are global, but we often only look at the localised focus most relevant to us.

I have asked historians at talks and so on if they think this is plausible - most just look at me slightly confused, so this really is just my thoughts on the subject. I even think the witch trials and things can basically be boiled down to... something disrupted the system and people panicked. That something, I think, is nearly always factors beyond our control - what then happens during the disasters is very much humanity's own invention... war, famine, plague loop-la-lopping around each other in diminishing loops until things have settled and are stable again. The "dark ages" is probably the last BIG one of these, but it is not the only one, and I don't even think it was the biggest - it is just that most people don't seem to realise what a wealth of very, very old and advanced history there is outside of Europe.


Creative Commons Attribution-NonCommercial-ShareAlike 2.0 UK: England & Wales