Wednesday, September 16, 2015

Nerd Food: Neurons for Computer Geeks - Part VI: LIF At Long Last!

Welcome to part VI of a multi-part series on modeling neurons. In part V we added a tad more theory to link electricity with neurons, and also tried to give an idea of just how complex neurons are. Looking back on that post, I cannot help but notice I skipped one bit that is rather important to understanding Integrate-and-Fire (IAF) models. So let's look at that first and then return to our trail.

Resting Potential and Action Potential

We have spoken before about the membrane potential and the resting membrane potential, but we did so with such a high degree of hand-waving it now warrants revisiting. When we are talking about the resting membrane potential we mean just that - the value for the membrane potential when nothing much is happening. That is the magical circa -65 mV we discussed before - with all of the related explanations on conventions around negative voltages. However, time does not stand still and things happen. The cell receives input from other neurons, and this varies over time. Some kinds of inputs can cause events to trigger on the receiving neuron: active ion channels may get opened or shut, ions move around, concentrations change and so forth, and thus the cell will change its membrane potential in response. When these changes result in a higher voltage - such as moving to -60 mV - we say a depolarisation is taking place. Conversely, when the voltage becomes more negative, we say hyperpolarisation is occurring.

Now, it may just happen that there is a short-lived but "strong" burst of depolarisation, followed by an equally rapid hyperpolarisation, as a result of which the axon terminal decides to release neurotransmitters into the synapse (well, into the synaptic gap or synaptic cleft, to be precise). This is called an action potential, and it is also known by many other names such as "nerve impulse" or "spike". When you hear that "a neuron has fired", this means that an action potential has just been emitted. If you record the neuron's behaviour over time you will see a spike train - a plot of the voltage over time, clearly showing the spikes. Taking a fairly random example:

Simulation_of_hrose_neuron.png

Figure 1: Source: Wikipedia, Neural oscillation

One way of picturing this is as a kind of "chain-reaction" whereby something triggers the voltage of the neuron to rise, which triggers a number of gates to open, which then trigger the voltage to rise and so on, until some kind of magic voltage threshold is reached where the inverse occurs: the gates that were causing the voltage to rise shut and some other gates that cause the voltage to decrease open, and so on, until we fall back down to the resting membrane potential. The process feeds back on itself, first as a positive feedback and then as a negative feedback. In the case of the picture above, something else triggers us again and again, until we finally come to rest.

This spiking or firing behaviour is what we are trying to model.

Historical Background

As it happens, we are not the first ones to try to do so. A couple of years after Einstein's annus mirabilis, a French chap called Louis Lapicque was also going through his own personal moment of inspiration, the output of which was the seminal Recherches quantitatives sur l'excitation électrique des nerfs traitée comme une polarisation. It is summarised here in fairly accessible English by Abbott.

Lapicque had the insight of imagining the neuron as an RC circuit, with the membrane potential's behaviour explained as the interplay between capacitor and resistor; the action potential is then the capacitor reaching a threshold followed by a discharge. Even with our faint understanding of the subject matter, one cannot but appreciate Lapicque's brilliance in reaching these conclusions in 1907. Of course, he also had to rely on the work of many others to get there, let's not forget.

This model is still considered a useful model today, even though we know so much more about neurons now - a great example of what we mentioned before in terms of the choices of the level of detail when modeling. Each model is designed for a specific purpose and it should be as simple as possible for the stated end (but no simpler). As Abbott says:

While Lapicque, because of the limited knowledge of his time, had no choice but to model the action potential in a simple manner, the stereotypical character of action potentials allows us, even today, to use the same approximation to avoid computation of the voltage trajectory during an action potential. This allows us to focus both intellectual and computation resources on the issues likely to be most relevant in neural computation, without expending time and energy on modeling a phenomenon, the generation of action potentials, that is already well understood.

The IAF Family

Integrate-and-Fire is actually a family of models - related because all of them follow Lapicque's original insights. Over time, people have addressed shortcomings in the model by adding more parameters and modifying it slightly and from this other models were born.

In general, models in the IAF family are single neuron models with a number of important properties (as per Izhikevich):

  • The spikes are all or none; that is, we either spike or we don't. This is a byproduct of the way spikes are added to the model, as we shall see later. This also means all spikes are identical, because they are all created the same way.
  • The threshold for the spike is well defined and there is no ambiguity as to whether the neuron will fire or not.
  • It is possible to add a refractory period, similarly to how we add the spike. The refractory period is a time during which the neuron is less excitable (e.g. ignores inputs) and occurs right after the spike.
  • Positive currents are used as excitatory inputs and negative currents as inhibitory inputs.

But what do the members of this family look like? We will take a few examples from Wikipedia to make a family portrait and then focus on LIF.

IAF: Integrate-and-Fire

This is the Lapicque model. It is also called a "perfect" or "non-leaky" neuron. The formula is as follows:

\begin{align} I(t) = C_m \frac{dV_m(t)}{dt} \end{align}

The m's are there to signify membrane, nothing else. Note that it's the job of the user to determine θ - the point at which the neuron spikes - and then to reset everything to zero and start again. If you are wondering why it's called "integrate", that's because the differential equation must be integrated before we can compare the current value to a threshold and then, if we're past it, well - fire! Hence Integrate-and-Fire.

Wikipedia states this in a classier way, of course:

[This formula] is just the time derivative of the law of capacitance, Q = CV. When an input current is applied, the membrane voltage increases with time until it reaches a constant threshold Vth, at which point a delta function spike occurs and the voltage is reset to its resting potential, after which the model continues to run. The firing frequency of the model thus increases linearly without bound as input current increases.
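To make the "integrate, then fire" loop concrete, here is a minimal sketch in C++ - not the code we will end up with later in the series, and with made-up names and parameter values - of a perfect IAF neuron driven by a constant input current. It simply accumulates I/C at every time step and fires whenever the user-chosen threshold θ is crossed:

#include <iostream>
#include <vector>

// Perfect (non-leaky) integrate-and-fire: C dV/dt = I(t).
// All names and values here are illustrative only.
std::vector<double> perfect_iaf(double I, double C, double theta,
                                double v_reset, double dt, double end) {
    std::vector<double> spike_times;
    double v(v_reset);
    for (double t(0); t < end; t += dt) {
        v += dt * (I / C);            // "integrate" the input current
        if (v >= theta) {             // threshold reached: "fire"...
            spike_times.push_back(t);
            v = v_reset;              // ...and reset
        }
    }
    return spike_times;
}

int main() {
    // 1 nA into 1 nF with a 15 mV threshold; SI units throughout.
    const auto spikes(perfect_iaf(1e-9, 1e-9, 15e-3, 0.0, 1e-4, 0.1));
    std::cout << "spikes in 100 ms: " << spikes.size() << std::endl;
}

Note how the firing rate grows linearly with the input current, just as the quote says, and how the spike itself is not modelled at all - it is just a threshold check followed by a reset.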

Integrate-and-Fire with Refractory Period

It is possible to extend IAF to take the refractory period into account. This is done by adding a period of time tref during which the neuron does not fire.

LIF: Leaky Integrate-and-Fire

One of the problems of IAF is that it will "remember" stimulus, regardless of the time that elapses between stimuli. By way of example: if a neuron gets some input below the firing threshold at some time (say ta), then nothing for a long period of time and then subsequent stimulus at say tb, this will cause the neuron to fire (assuming the two inputs together are above the threshold). In the real world, neurons "forget" about below-threshold stimulus after a certain amount of time has elapsed. This problem is solved in LIF by adding a leak term to IAF. Wikipedia's formula is like so:

\begin{align} I_m(t) - \frac{V_m(t)}{R_m} = C_m \frac{dV_m(t)}{dt} \end{align}

We will discuss it in detail later on.

Interlude: Leaky Integrators and Low-Pass Filters

Update: this section got moved here from an earlier post.

Minor detour into the world of "Leaky Integrators". As it turns out, mathematicians even have a name to describe functions like the one above: they are called Leaky Integrators. A leaky integrator is something that takes an input and "integrates" - that is, sums it over a range - but by doing so, starts "leaking" values out. In other words, a regular sum of values over a range would just result in an ever-growing output. With a leaky integrator, we keep adding, but we also keep leaking, so the sum is constantly pulled back towards where we started off.

It turns out these kinds of functions have great utility. For example, imagine that you have an input signal that wanders around over time, sometimes changing slowly and sometimes jumping about rapidly. When you supply this input to a leaky integrator, the output is a smoothed-out version of it: the slow variations pass through largely intact, whereas the rapid fluctuations are "filtered out". This is known as a low-pass filter. One can conceive of a function that acts in the opposite way - a high-pass filter.
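To make this a bit more tangible, here is a minimal sketch of a discrete leaky integrator in C++ (the names and the leak factor are arbitrary). At each step it adds a fraction of the new input and lets a fraction of the accumulated value leak away, which gives exactly the smoothing - low-pass - behaviour described above:

#include <iostream>
#include <vector>

// Discrete leaky integrator: y = (1 - alpha) * y + alpha * x.
// alpha in (0, 1] controls how quickly old inputs leak away.
double leaky_integrate(const std::vector<double>& input, double alpha) {
    double y(0.0);
    for (const auto x : input)
        y = (1.0 - alpha) * y + alpha * x;
    return y;
}

int main() {
    // A slowly varying input passes through almost untouched; a rapidly
    // alternating one gets smoothed towards its average.
    const std::vector<double> steady(10, 1.0);
    const std::vector<double> alternating{1, -1, 1, -1, 1, -1, 1, -1, 1, -1};
    std::cout << leaky_integrate(steady, 0.3) << " vs "
              << leaky_integrate(alternating, 0.3) << std::endl;
}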

Exponential Integrate-and-Fire

In this model, spike generation is exponential:

\begin{align} \frac{dX}{dt} = \Delta_T \exp\left(\frac{X - X_T}{\Delta_T}\right) \end{align}

Wikipedia explains it as follows:

where X is the membrane potential, XT is the membrane potential threshold, and ΔT is the sharpness of action potential initiation, usually around 1 mV for cortical pyramidal neurons. Once the membrane potential crosses XT, it diverges to infinity in finite time.

Others

We could continue and look into other IAF models, but you get the point. Each model has limitations, and as people work through those limitations - e.g. try to make the spike trains generated by the model closer to those observed in reality - they make changes to the model and create new members of the IAF family.

Explaining the LIF Formula

Let's look at a slightly more familiar formulation of LIF:

\begin{align} \tau_m \frac{dv}{dt} = -v(t) + RI(t) \end{align}

By now this should make vague sense, but let's do a step-by-step breakdown just to make sure we are all on the same page. First, we know that the current of the RC circuit is defined like so:

\begin{align} I(t) = I_R + I_C \end{align}

From Ohm's Law we also know that:

\begin{align} I_R = \frac {v}{R} \end{align}

And from the rigmarole of the capacitor we also know that:

\begin{align} I_C = C \frac{dv}{dt} \end{align}

Thus it's not much of a leap to say:

\begin{align} I(t) = \frac {v(t)}{R} + C \frac{dv}{dt} \end{align}

Now, if we multiply both sides by R, we get:

\begin{align} RI(t) = v(t) + RC \frac{dv}{dt} \end{align}

Remember that RC is τ, the RC time constant; in this case we are dealing with the membrane, hence the m. With that, the rest of the rearranging to the original formula should be fairly obvious.
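Spelling it out anyway: subtract v(t) from both sides, recognise RC as τm, and flip things around:

\begin{align} RI(t) - v(t) = RC \frac{dv}{dt} \iff \tau_m \frac{dv}{dt} = -v(t) + RI(t) \end{align}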

Also, if you recall, we mentioned Leaky Integrators before. You should hopefully be able to see the resemblance between these and our first formula.

Note that we did not model spikes explicitly with this formula. However, when it comes to implementing it, all that is required is to look for a threshold value for the membrane potential - called the spiking threshold; when that value is reached, we need to reset the membrane potential back to a lower value - the reset potential.

And with that we have enough to start thinking about code…

Method in our Madness

.. Or so you may think. First, a quick detour on discretisation. As it happens, computers are rather fond of discrete things rather than the continuous entities that inhabit the world of calculus. Computers are very much of the same opinion as the priest who said:

And what are these same evanescent Increments? They are neither finite Quantities nor Quantities infinitely small, nor yet nothing. May we not call them the Ghosts of departed Quantities?

So we cannot directly represent differential equations in the computer - not even the simpler ordinary differential equations (ODEs), with their single independent variable. Instead, we need to approximate them with a method for numerical integration of the ODE. Remember: when we say integration we just mean "summing".

Once we enter the world of methods and numerical analysis we are much closer to our ancestral home of Software Engineering. The job of numerical analysis is to look for ways in which one can make discrete approximations of the problems in mathematical analysis - like, say, calculus. The little recipes they come up with are called numerical methods. A method is nothing more than an algorithm, a set of steps used iteratively. One such method is the Euler Method: "[a] numerical procedure for solving ordinary differential equations (ODEs) with a given initial value", as Wikipedia tells us, and as it happens that is exactly what we are trying to do.

So how does the Euler method work? Very simply. First you know that:

\begin{align} y(t_0) = y_0 \\ y'(t) = f(t, y(t)) \end{align}

That is, at the beginning of time we have a known value. Then, for all other t's, we use the current value in f in order to be able to compute the next value. Let's imagine that our steps - how much we are moving forwards by - are of size h. You can then say:

\begin{align} t_{n+1} = t_n + h \\ y_{n+1} = y_n + h \cdot f(t_n, y_n) \end{align}

And that's it. You just need to know where you are right now, by how much you need to scale the function - i.e. the step size - and then apply the function to the current values of t and y.

In code:

template<typename F>
double euler(F f, double y0, double start, double end, double h) {
    double y(y0);
    for (auto t(start); t < end; t += h)
        y += h * f(t, y, h); // y_{n+1} = y_n + h * f(t_n, y_n)
    return y; // the approximation of y at the end of the interval
}

We are passing h to the function F because it needs to know about the step size, but other than that it should be a pretty clean mapping from the maths above.

This method is also known as Forward Euler or Explicit Euler.
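As a quick sanity check - just a sketch, with arbitrary values, and assuming the euler template above is in scope - we can point it at the RC natural response from part IV, dv/dt = -v/τ, and compare the result against the analytic solution V0 e^{-t/τ}:

#include <cmath>
#include <iostream>

int main() {
    const double tau(0.02); // 20 ms time constant, arbitrary
    const double v0(1.0);   // starting voltage

    // dv/dt = -v / tau; t and h are unused here but part of the signature.
    const auto f([tau](double /*t*/, double v, double /*h*/) { return -v / tau; });

    const double approximate(euler(f, v0, 0.0, 0.1, 1e-4));
    const double exact(v0 * std::exp(-0.1 / tau));
    std::cout << approximate << " vs " << exact << std::endl;
}

With a 0.1 ms step the two values come out within a couple of percent of each other; shrink h and the gap narrows, which is the whole point of the method.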

What next?

And we run out of time yet again before we can get into serious coding. In the next instalment we shall cover the implementation of the LIF model.

Created: 2015-09-16 Wed 18:05


Monday, September 07, 2015

Nerd Food: Neurons for Computer Geeks - Part V: Yet More Theory

Welcome to part V of a multi-part series on modeling neurons. In part IV we introduced the RC Circuit by making use of the foundations we painstakingly laid in previous posts. In truth, we could now move on to code and start looking at the Leaky Integrate-and-Fire (LIF) model, since we've already covered most required concepts. However, we are going to do just a little bit more theory before we get to that.

The main reason for this detour is that I do not want to give you the impression neurons are easy; if there is one thing they are not, it is easy. So we're going to resume our morphological and electrical exploits to try to provide a better account of the complexity inside the neuron, hopefully supplying enough context to appreciate the simplifications done in LIF.

The content of this post is heavily inspired by Principles of Computational Modelling in Neuroscience, a book that is a must-read introduction if you decide to get serious about this subject. If so, you may also want to check out the Gerstner videos: Neural networks and biological modeling.

But you need not worry, casual reader. Our feet are firmly set in layman's land and we'll remain so until the end of the series.

Brief Context on Modeling

Before we get into the subject matter proper, I'd like us to ponder a few "meta-questions" in terms of modeling.

Why Model?

A layperson may think that we model neurons because we want to build a "computer brain": one that is similar to a real brain, with its amazing ability to learn, and one which at some point may even think and be conscious. Hopefully, after you finish this series of posts, you will appreciate the difficulty of the problem and see that it's not very likely we'll be able to make a "realistic" "computer brain" any time soon - for sensible values of "realistic", "computer brain" and "any time soon".

Whilst we have good models that explain part of the behaviour of the neuron and good models for neural networks too, it is not the case that we can put all of these together to form some kind of "unified neuron model", multiply it by 80 billion, add a few quadrillion synapses and away we go: artificial consciousness. Given what we know at the moment, this approach is far too computationally demanding to be feasible. Things would change if there was a massive leap in computational power, of course, but not if they stay at present projections - even with Moore's Law.

So if we are not just trying to build a computer brain, then why bother? Well, if you set your sights a little lower, computational models are actually amazingly useful:

  • one can use code to explore a small portion of the problem domain, making and validating predictions using computer models, and then test those predictions in the lab with real wetware. The iterative process is orders of magnitude faster.
  • computer models are now becoming quite sophisticated, so in some cases they are good representations of biological processes. This tends to be the case for small things such as individual cells or smaller. As computers get faster and faster according to Moore's Law, the power and scope of these models grows too.
  • distributing work with Free and Open Source Software licences means it is much easier for researchers to reproduce each other's work, as well as for them to explore avenues not taken by those who did the work originally, speeding things up considerably. Standing on the shoulders of giants and all that.

What Tools Do We Model With?

The focus of these posts is on writing models from scratch, but that's not how most research is conducted. In the real world, people try their best to reuse existing infrastructure - of which there is plenty. For example there is NEURON, PyNN, Brian and much more. Tools and processes have evolved around these ecosystems, and there is a push to try to standardise around the more successful frameworks.

There is also a push to find some kind of standard "language" to describe models so that we can all share information freely without having to learn the particulars of each other's representations. The world is not quite there yet, but initiatives such as NeuroML are making inroads in this direction.

However, the purpose of this series is simplification, so we will swerve around all of this. Perhaps material for another series.

At What Level Should One Model?

A related question to the previous ones - and one that is not normally raised in traditional software engineering, but is very relevant in biology - is the level of detail at which one should model.

Software Engineers tend to believe there is a model for a problem, and that once you understand enough about the problem domain you will come up with it and all will be light. Agile and sprints are just a way to converge to it, to the perfection that exists somewhere in the platonic cloud. Eric Evans with DDD started to challenge that assumption somewhat by making us reflect on just what it is that we mean by "model" and "modeling", but, in general, we have such an ingrained belief in this idea that it is very hard to shake off, or even to realise the belief is there in the first place. Most of us still think of the code representation of the domain model as the model - rather than accept that it is one of a multitude of possible representations, each suitable for a given purpose.

Alas, all of this becomes incredibly obvious when you are faced with a problem like modeling a neuron or a network of neurons. Here, there is just no such thing as the "right model"; only a set of models at different levels of detail, each with a different set of trade-offs, and each of which only makes sense in the context of what one is trying to study. It may make sense to model a network of neurons, ignoring the finer details of each one and looking at their behaviour as a group, or it may make sense to model individual bits of the neuron as entities in their own right. What makes a model "right" or "wrong" is what it is that we are using it for, and how much computational power one has at one's disposal.

Having said all of that, let's resume our morphology adventures.

Electricity and Neurons

We started off with an overview of the neuron and then moved over to lots and lots of electricity; now it's time to see how those two fit together.

As we explained in part I, there is an electric potential difference between the inside of the cell and the outside, called the membrane potential. The convention is to compute this potential by subtracting the potential outside the cell from the potential inside the cell; current is positive when there is a flow of positive charge from the inside to the outside, and negative otherwise. Taking these definitions into account, one should be able to make sense of the resting membrane potential: it is around -65 mV. But how does this potential change?

Ion Channels

Earlier, we spoke about ions - atoms that either lost or gained electrons and so are positively or negatively charged. We also said that, in general, the cell's membrane is impermeable, but there are tiny gaps in the membrane which allow things in and out of the cell. Now we can expand a bit further. Ion channels are one such gap, and they have that name because they let ions through. There are many kinds of ion channels. One way of naming them is to use the ion they are most permeable to - but of course, this being biology, the ion channels don't necessarily always have a major ion they are permeable to.

Another useful categorisation distinguishes between passive and active ion channels. Active channels are those that change their permeability depending on external factors such as the membrane potential, the concentration of certain ions, and so on. For certain values they are open - i.e. permeable - whereas for other values they are closed, not allowing any ions through. Passive channels are simpler: they just have a fixed permeability behaviour.

There are also ionic pumps. These are called pumps because they take one kind of ion out, exchanging it for another kind. For instance, the sodium-potassium pump pushes potassium into the cell and expels sodium out. A pump has a stoichiometry, which is a fancy word to describe the ratio of ions being pumped in and out.

Complexity Starts To Emerge

As you can imagine, the key to understanding electric behaviour is understanding how these pesky ions move around. Very simplistically, ions tend to move for two reasons: because there is a potential difference between the inside and the outside of the cell, or because of the concentration gradient of said ion. The concentration gradient just means that, left to their own devices, concentrations become uniform over time. For example, if you drop some ink in a glass of water, you will start by seeing the ink quite clearly; given enough time, the ink will diffuse in the water, making it all uniformly coloured. The same principle applies to ions - they want to be uniformly concentrated.

It should be fairly straightforward to work out that a phenomenal number of permutations is possible here. Not only do we have a great number of channels, all with different properties - some switching on and off as properties change around the cell - but we also have the natural flow of ions being affected by the membrane's potential and the concentration gradient, all of which are changing over time. To make matters worse, factors interact with each other such that even if you have simple models to explain each aspect individually, the overall behaviour is still incredibly complex.

Now imagine more than 50 thousand such ion channels - of over one hundred (known) types - in just a single neuron and you are starting to get an idea of the magnitude of the task.

Equivalent Circuit for a Patch of Membrane

But let's return to simplicity. Very clever people determined that it is possible to model the behaviour of ions and their electric effects by thinking of them as an electric circuit. Taking a patch of membrane as an example, it can be visualised as an electric circuit like so:

Cell_membrane_equivalent_circuit.svg

Figure 1: Source: Wikipedia, Membrane Potential

What this diagram tells us is that the membrane itself acts as a capacitor, with its capacitance determined by the properties of the lipid bilayer. We didn't really discuss the lipid bilayer before, so perhaps a short introduction is in order. The membrane is made up of two sheets of lipids (think fatty acids) which, when layered like so, have an interesting property: the resulting bilayer is impermeable to most things, such as water molecules and ions. The membrane itself is pretty thin, at around 5 nm.

The membrane capacitance is considered constant. We then have a series of ion channels: sodium, potassium, chloride, calcium. Each of these can be thought of as a pairing of a resistor with variable conductance coupled with a battery. Note that the resistor and the battery are in series, but the ion channels themselves form a parallel circuit. The voltages for each pathway are determined by the different concentrations of the ion inside and outside the cell.

If we further assume fixed ion concentrations and passive ion channels, we can perform an additional simplification on the circuit above and we finally end up with an RC Circuit:

Cell_membrane_reduced_circuit.svg

Figure 2: Source: Wikipedia, Membrane Potential

The circuit now has one resistance, which we call the membrane resistance, and a membrane battery.

What next?

Hopefully you can start to see both the complexity around modeling neurons and the necessity to create simpler models to make them computationally feasible - just look at the amount of simplification that was required for us to get to an RC Circuit!

But at least we can now look forward to implementing LIF.

Created: 2015-09-07 Mon 17:12


Saturday, September 05, 2015

Nerd Food: Neurons for Computer Geeks - Part IV: More Electricity

Part I of this series looked at a neuron from above; Part II attempted to give us the fundamental building blocks in electricity required to get us on the road to modeling neurons. We did a quick interlude with a bit of coding in part III but now, sadly, we must return to boring theory once more.

Now that we grok the basics of electricity, we need to turn our attention to the RC circuit. As we shall see, this circuit is of particular interest when modeling neurons. The RC circuit is so called because it is a circuit, and it is composed of a Resistor and a Capacitor. We've already got some vague understanding of circuits and resistors, so let's start by having a look at this new crazy critter, the capacitor.

Capacitors

Just as the battery is a source of current, one can think of the capacitor as a temporary store of charge. If you plug a capacitor into a circuit with just a battery, it will start to "accumulate" charge over time, up to a "maximum limit". But how exactly does this process work?

In simple terms, the capacitor is made up of two metal plates, one of which will connect to the positive end of the battery and another which connects to the negative end. At the positive end, the metal plate will start to lose negative charges because these are attracted to the positive end of the battery. This will make this metal plate positively charged. Similarly, at the negative end, the plate will start to accumulate negative charges. This happens because the electrons are repelled by the negative end of the battery. Whilst this process is taking place, the capacitor is charging.

At some point, the process reaches a kind of equilibrium, whereby the electrons remaining in the positively charged plate are attracted as strongly to the plate as they are to the positive end of the battery, and thus stop flowing. At this point we say the capacitor is charged. It is interesting to note that both plates of the capacitor end up with the same "total" charge but different signs (i.e. -q and +q).

Capacitance

We mentioned a "maximum limit". A few things control this limit: how big the plates are, how much space there is between them and the kind of material we place between them, if any. The bigger the plates and the closer they are - without touching - the more you can store in the capacitor. The material used for the plates is, of course, of great importance too - it must be some kind of metal good at conducting.

In a more technical language, this notion of a limit is captured by the concept of capacitance, and is given by the following formula:

\begin{align} C = \frac{q}{V} \end{align}

Let's break it down into its components to see what the formula is trying to tell us. The role of V is to inform us about the potential difference between the two plates. This much is easy to grasp; since one plate is positively charged and the other negatively charged, it is straightforward to imagine that a charge will have a different electric potential in each plate, and thus that there will be an electric potential difference between them. q tells us about the magnitude of the charges that we placed on the plates - i.e. ignoring the sign. It wouldn't be too great a leap to conceive that plates with a larger surface area would probably have more "space" for charges and so a larger q - and vice-versa.

Capacitance is then the ratio between these two things; a measure of how much electric charge one can store for a given potential difference. It may not be very obvious from this formula, but capacitance is constant. That is to say, a given capacitor has a fixed capacitance, determined by the properties described above. This formula does not describe the discharging or charging process - but of course, capacitance is used in the formulas that describe those.

Capacitance is measured in SI units of Farads, denoted by the letter F. A farad is 1 coulomb over 1 volt:

\begin{align} 1\,F = \frac{1\,C}{1\,V} \end{align}
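To put an arbitrary number on it: a capacitor holding 3 microcoulombs of charge on each plate, with a potential difference of 1.5 volts between the plates, has a capacitance of

\begin{align} C = \frac{q}{V} = \frac{3 \times 10^{-6}\,\mathrm{C}}{1.5\,\mathrm{V}} = 2\,\mathrm{\mu F} \end{align}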

Capacitors and Current

After charging a capacitor, one may be tempted to discharge it. For that one could construct a simple circuit with just the capacitor. Once the circuit is closed, the negative charges will start to flow to the positively charged plate, at full speed - minus the resistance of the material. Soon enough both plates would be made neutral. At first glance, this may appear to be very similar to our previous circuit with a battery. However, there is one crucial difference: the battery circuit had a constant voltage and a constant current (for a theoretical battery) whereas a circuit with a discharging capacitor has a voltage and a current that decay over time. By "decaying", all we really mean is that we start at some arbitrarily high value and move towards zero over a period of time. This makes intuitive sense: you cannot discharge the capacitor forever; and, as you discharge it, the voltage starts to decrease - for there are fewer charges on the plates and so less potential difference - and similarly, so does the current - for there is less "pressure" to make the charges flow.

This intuition is formally captured by the following equation:

\begin{align} I(t) = C \frac{dV(t)}{dt} \end{align}

I'm rather afraid that, at this juncture, we have no choice but to introduce Calculus. A proper explanation of Calculus is a tad outside the remit of these posts, so instead we will have to make do with some common-sense but extremely hand-waved interpretations of the ideas behind it. If you are interested in a light-hearted but still comprehensive treatment of the subject, perhaps A Gentle Introduction To Learning Calculus may be to your liking.

Let's start by taking a slightly different representation of the formula above and then compare these two formulas.

\begin{align} i = C \frac{dv}{dt} \end{align}

In the first case we are talking about the current I, which normally is some kind of average current over some unspecified period. Up to now, time didn't really matter - so we got away with just talking about I in these general terms. This was the case with Ohm's Law in part II. However, as we've seen, it is not so with capacitors - so we need to make the current specific to a point in time. For that we supply an "argument" to I - I(t); here, a mathematician would say that I is a function of time. In the second case, we make use of i, which is the instantaneous current through the capacitor. The idea is that, somehow, we are able to know - for any point in time - what the instantaneous current is.

How we achieve that is via the magic of Calculus. The expression dv/dt in the second formula provides us with the instantaneous rate of change of the voltage over time. The same notion can be applied to V, as per first formula.

These formulas may sound awfully complicated, but what they are trying to tell us is that the capacitor's current has the following properties:

  • it varies as a "function" of time; that is to say, different time points have different currents. Well, that's pretty consistent with our simplistic notion of a decaying current.
  • it is "scaled" by the capacitor's capacitance C; "bigger" capacitors can hold on to higher currents for longer when compared to "smaller" capacitors.
  • the change in electric potential difference varies as a function of time. This is subtle but also makes sense: we imagined some kind of decay for our voltage, but there was nothing to say the decay would remain constant until we reached zero. This formula tells us it does not; voltage may decrease faster or slower at different points in time.
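A quick, arbitrary worked example: if the voltage across a 2 μF capacitor happens to be falling at 3 volts per second at some instant, then the instantaneous current at that instant is

\begin{align} i = C \frac{dv}{dt} = 2 \times 10^{-6}\,\mathrm{F} \times (-3\,\mathrm{V/s}) = -6\,\mathrm{\mu A} \end{align}

with the sign simply encoding the direction in which the charges are flowing.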

Circuits: Parallel and Series

The RC circuit can appear in a parallel or series form, so it's a good time to introduce these concepts. One way we can connect circuits is in series; that is, all components are connected along a single path, such that the current flows through all of them, one after the other. If any component fails, the flow will cease.

This is best understood by way of example. Let's imagine the canonical example of a battery - our old friend the 1.5V AA battery - and three small light bulbs. A circuit that connects them in series would be made up of a cable segment plugged onto one of the battery's terminals - say +, then connected to the first light bulb. A second cable segment would then connect this light bulb to another light bulb, followed by another segment and another light bulb. Finally, a cable segment would connect the last light bulb to the other battery terminal - say -. Graphically - and pardoning my inability to use Dia to create circuit diagrams - it would look more or less like this:

series_circuit.png

Figure 1: Series circuit. Source: Author

This circuit has a few interesting properties. First, if any of the light bulbs fail, all of them will stop working because the circuit is no longer closed. Second, if one were to add more and more light bulbs, the brightness of each light bulb will start to decrease. This is because each light bulb is in effect a resistor - the light shining being a byproduct of said resistance - and so they are each decreasing the current. So it is that in a series circuit the total resistance is given by the sum of all individual resistances, and the current is the same for all elements.

Parallel circuits are a bit different. The idea is that two or more components are connected to the circuit in parallel, i.e. there are two or more paths along which the current can flow at the same time. So we'd have to modify our example to have a path to each of the light bulbs which exists in parallel to the main path - quite literally a segment of cable that connects the other segments of cable, more or less like so:

parallel_circuit.png

Figure 2: Parallel circuit. Source: Author

Here you can see that if a bulb fails, there is still a closed loop in which current can flow, so the other bulbs should be unaffected. This also means that the voltage is the same for all components in the circuit. Current and resistance are now "relative" to each component, and it is possible to compute the overall current for the circuit via Kirchhoff's Current Law. Simplifying, it means that the current for the circuit is the sum of the currents flowing through each component.
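In symbols, and sticking with our three light bulbs: in a series circuit the resistances add up, whereas in a parallel circuit (by Kirchhoff's Current Law) it is the branch currents that add up:

\begin{align} R_{series} = R_1 + R_2 + R_3 \\ I_{total} = I_1 + I_2 + I_3 \end{align}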

This will become significant later on when we finally return to the world of neurons.

The RC Circuit

With all of this we can now move to the RC circuit. In its simplest form, the circuit has a source of current with a resistor and a capacitor:

Discharging_capacitor.svg

Figure 3: Source: Wikipedia, RC circuit

Let's try to understand how the capacitor's voltage will behave over time. This circuit is rather similar to the one we analysed when discussing capacitance, with the exception that we now have a resistor as well. But in order to understand this, we must return to Kirchhoff's current law, which we hand-waved a few paragraphs ago. Wikipedia tells us that:

The algebraic sum of currents in a network of conductors meeting at a point is zero.

One way to understand this statement is to think that the total quantity of current entering a junction point must be identical to the total quantity leaving that junction point. If we consider entering to be positive and leaving to be negative, that means that adding the two together must yield zero.

Because of Kirchhoff's law, we can state that, for the positive terminal of the capacitor:

\begin{align} i_c(t) + i_r(t) = 0 \end{align}

That is: at any particular point in time t, the current flowing through the capacitor added to the current flowing through the resistor must sum to zero. However, we can now make use of the previous formulas; after all, our section on capacitance taught us that:

\begin{align} i_c(t) = C \frac{dv(t)}{dt} \end{align}

And making use of Ohm's Law we can also say that:

\begin{align} i_r(t) = \frac{v(t)}{R} \end{align}

So we can expand the original formula to:

\begin{align} C \frac{dv(t)}{dt} + \frac{v(t)}{R} = 0 \end{align}

Or:

\begin{align} C \frac{dV}{dt} + \frac{V}{R} = 0 \end{align}

I'm not actually going to follow the remaining steps to compute V, but you can see them here and they are fairly straightforward, or at least as straightforward as calculus gets. The key point is, when you solve the differential equation for V, you get:

\begin{align} V(t) = V_0e^{-\frac{t}{RC}} \end{align}

With V0 being the voltage at time zero. This is called the circuit's natural response. This equation is very important. Note that we are now able to describe the behaviour of the voltage over time with just a few inputs: the starting voltage, the time, the resistance and the capacitance.

A second thing falls out of this equation: the RC Time constant, or τ. It is given by:

\begin{align} \tau = RC \end{align}

The Time Constant is described in a very useful way in this page, so I'll just quote them and their chart here:

The time required to charge a capacitor to 63 percent (actually 63.2 percent) of full charge or to discharge it to 37 percent (actually 36.8 percent) of its initial voltage is known as the TIME CONSTANT (TC) of the circuit.

32NE0159.GIF

Figure 4: The RC Time constant. Source: Concepts of alternating current
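A quick, arbitrary example: with a 10 kΩ resistor and a 100 μF capacitor we get

\begin{align} \tau = RC = (10 \times 10^{3}\,\Omega) \times (100 \times 10^{-6}\,\mathrm{F}) = 1\,\mathrm{s} \end{align}

so one second into the discharge the voltage has dropped to e^{-1} of its starting value - roughly 37 percent, exactly the figure in the quote above.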

What next?

Now that we understand the basic behaviour of the RC Circuit, and have a vague understanding of the maths that describes it, we can return to the neuron's morphology. Stay tuned.

Created: 2015-09-05 Sat 18:56


Friday, September 04, 2015

Nerd Food: Neurons for Computer Geeks - Part III: Coding Interlude

If you are anything like me, the first two parts of this series have already bored you silly with theory (Part I, Part II) and you are now hankering for some code - any code - to take away the pain. So part III is here to do exactly that. However, let me preface that grandiose statement by saying this is not the best code you will ever see. Rather, it's just a quick hack to introduce a few of the technologies we will make use of for the remainder of this series, namely:

  • CMake and Ninja: this is how we will build our code.
  • Wt: provides a quick way to knock-up a web frontend for C++ code.
  • Boost: in particular Boost Units and later on Boost OdeInt. Provides us with the foundations for our numeric work.

What I mean by a "quick hack" is: there is no validation, no unit tests, no "sound architecture" and none of the things you'd expect from production code. But it should serve as an introduction to modeling in C++.

All the code is available on GitHub under neurite. Let's have a quick look at the project structure.

CMake

We just took a slimmed-down version of the Dogen build system to build this code. We could have gotten away with a much simpler CMake setup, but I intend to use it for the remainder of this series, so that's why it's a bit more complex than you'd expect. It is made up of the following files:

  • Top-level CMakeLists.txt: ensures all of the dependencies can be found and configured for building, sets up the version number and debug/release builds.
  • build/cmake: any Find* scripts that are not supplied with the CMake distribution. We Googled for these and copied them here.
  • projects/CMakeLists.txt: sets up all of the compiler and linker flags we need to build the project. Uses pretty aggressive flags such as -Wall and -Werror.
  • projects/ohms_law/src/CMakeLists.txt: our actual project, the bit that matters for this article.

ohms_law Project

The project is made up of two classes, in files calculator.[hc]pp and view.[hc]pp. The names are fairly arbitrary but they try to separate View from Model: the user interface is in view and the "number crunching" is in calculator.

The View

Let's have a quick look at view. In the header file we simply define a Wt application with a few widgets:

class view : public Wt::WApplication {
public:
  view(const Wt::WEnvironment& env);

private:
  Wt::WLineEdit* current_;
  Wt::WLineEdit* resistance_;
  Wt::WText* result_;
};

It is implemented in an equally trivial manner. We just setup the widgets and hook them together. Finally, we create a trivial event handler that performs the "computations" when the button is clicked.

view::view(const Wt::WEnvironment& env) : Wt::WApplication(env) {
  setTitle("Ohm's Law Calculator");

  root()->addWidget(new Wt::WText("Current: "));
  current_ = new Wt::WLineEdit(root());
  current_->setValidator(new Wt::WDoubleValidator());
  current_->setFocus();

  root()->addWidget(new Wt::WText("Resistance: "));
  resistance_ = new Wt::WLineEdit(root());
  resistance_->setValidator(new Wt::WDoubleValidator());

  Wt::WPushButton* button = new Wt::WPushButton("Calculate!", root());
  button->setMargin(5, Wt::Left);
  root()->addWidget(new Wt::WBreak());
  result_ = new Wt::WText(root());

  button->clicked().connect([&](Wt::WMouseEvent&) {
      const auto current(boost::lexical_cast<double>(current_->text()));
      const auto resistance(boost::lexical_cast<double>(resistance_->text()));

      calculator c;
      const auto voltage(c.voltage(resistance, current));
      const auto s(boost::lexical_cast<std::string>(voltage));
      result_->setText("Voltage: " + s);
    });
}

The Model

The model is just as simple as the view. It is made up of a single class, calculator, whose job is to compute the voltage using Ohm's Law. It does this by making use of Boost Units. This is obviously not necessary, but we wanted to take the opportunity to explore this library as part of this series of articles.

double calculator::
voltage(const double resistance, const double current) const {
  boost::units::quantity<boost::units::si::resistance>
    R(resistance * boost::units::si::ohms);
  boost::units::quantity<boost::units::si::current>
    I(current * boost::units::si::amperes);
  auto V(R * I);
  return V.value();
}
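For completeness, here is a tiny standalone driver for the calculator - hypothetical, and not part of the repository - just to show the class being used outside of Wt:

#include <iostream>
#include "calculator.hpp" // assumed header name; the actual include path may differ

int main() {
    calculator c;
    // Ohm's Law: 1.5 A through 10 ohms should give 15 V.
    std::cout << "Voltage: " << c.voltage(10.0, 1.5) << std::endl;
}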

Compiling and Running

If you are on a debian-based distribution, you can do the following steps to get the code up and running. First install the dependencies:

$ sudo apt-get install libboost-all-dev witty-dev ninja-build cmake clang-3.5

Then obtain the source code from GitHub:

$ git clone https://github.com/mcraveiro/neurite.git

Now you can build it:

$ cd neurite
$ mkdir output
$ cd output
$ cmake ../ -G Ninja
$ ninja -j5

If all went according to plan, you should be able to run it:

$ stage/bin/neurite_ohms_law --docroot . --http-address 0.0.0.0 --http-port 8080

Now, using a web browser such as Chrome, connect to http://127.0.0.1:8080 and you should see a "shiny" Ohm's Law calculator! Sorry, it just had to be done to take away the boredom a little bit. Let's proceed with the more serious matters at hand, with the promise that the real code will come later on.

Created: 2015-09-04 Fri 17:16
