Monday, November 30, 2015

Nerd Food: Tooling in Computational Neuroscience - Part II: Microscopy

Research is what I'm doing when I don't know what I'm doing.
Wernher von Braun

Welcome to the second instalment of our second series on Computational Neuroscience for lay people. You can find the first post of the previous series here, and the first post of the current series here. As you'd expect, this second series is slightly more advanced, and, as such, it is peppered with unavoidable technical jargon. Having said that, we shall continue to pursue our ambitious target of making things as easy to parse as possible (but no easier). If you read the first series, the second should hopefully make some sense.1

Our last post discussed Computational Neuroscience as a discipline, and the kind of things one may want to do in this field. We also spoke about models and their composition, and the desirable properties of a platform that runs simulations of said models. However, it occurred to me that we should probably build some kind of "end-to-end" understanding; that is, by starting with the simulations and models we are missing a vital link to the physical (i.e. non-computational) world. To put matters right, this part attempts to provide a high-level introduction to how data is acquired from the real world and can then be used - amongst other things - to inform the modeling process.

Macro and Micro Microworlds

For the purposes of this post, the data gathering process starts with the microscope. Of course, keep in mind that we are focusing only on the morphology at present - the shape and the structures that make up the neuron - so we are ignoring other important activities in the lab. For instance, one can conduct experiments to measure voltage in a neuron, and these measurements provide data for the functional aspects of the model. Alas, we will skip these for now, with the promise of returning to them at a later date2.

So, microscopes then. Microscopy is the technical name for the observation work done with the microscope. Because neurons are so small - some 4 to 100 microns in size - only certain types of microscopes are suitable for neuronal microscopy. To make matters worse, the sub-structures inside the neuron are an important area of study and they can be ridiculously small: a dendritic spine - one of the minute protrusions that come out of the dendrites - can be as tiny as 500 nanometres; the lipid bilayer itself is only 2 or 3 nanometres thick, so you can imagine how incredibly small ion channels and pumps are. Yet these are some of the things we want to observe and measure. Let's call this the "micro" work. On the other hand, we also want to understand connectivity and other larger structures, as well as perform observations of the evolution of the cell and so on. Let's call this the "macro" work. These are not technical terms, by the by, just so we can orient ourselves. So, how does one go about observing these differently sized microworlds?


Figure 1: Example of measurements one may want to perform on a dendrite. Source: Reversal of long-term dendritic spine alterations in Alzheimer disease models

Optical Microscopy

The "macro" work is usually done using the Optical "family" of microscopes, which is what most of us think of when hearing the word microscope. As it was with Van Leeuwenhoek's tool in the sixteen hundreds, so it is that today's optical microscopes still rely on light and lenses to perform observations. Needless to say, things did evolve a fair bit since then, but standard optical microscopy has not completely removed the shackles of its limitations. These are of three kinds, as Wikipedia helpfully tells us: a) the objects we want to observe must be dark or strongly refracting - a problem, since the internal structures of the cell are transparent; b) visible light's diffraction limit means that we cannot go much lower than 200 nanometres - pretty impressive, but unfortunately not quite low enough for detailed sub-structure analysis; and c) out of focus light hampers image clarity.

Workarounds to these limitations have been found in the guise of techniques, with the aim of augmenting the abilities of standard optical microscopy. There are many of these techniques. There is Confocal Microscopy3, which improves resolution and contrast; Fluorescence Microscopy, whose super-resolution variants use sub-diffraction techniques to reconstruct some of the detail that is lost to diffraction; or the incredible-looking movies produced by Multiphoton Microscopy. And of course, it is possible to combine multiple techniques in a single microscope, as is the case with Multiphoton Fluorescence Microscopes and many others.

In fact, given all of these developments, it seems there is no sign of optical microscopy dying out. Presumably some of this is due to the relatively lower cost of this approach, as well as its ease of use. In addition, optical microscopy is complementary to the other, more expensive types of microscopes; it is the perfect tool for "macro" work, which can then help point out where to do the "micro" work. For example, you can use an optical microscope to assess the larger structures and see how they evolve over time, and eventually decide on specific areas that require more detailed analysis. And when you do, you need a completely different kind of microscope.

Electron Microscopy

When you need really high resolution, there is only one tool to turn to: the Electron Microscope (EM). This crazy critter can provide insane levels of magnification by using a beam of electrons instead of visible light. Just how insane, you ask? Well, if you consider that an optical microscope lives in the range of 1500x to 2000x - that is, it can magnify a sample up to two thousand times - an EM can magnify as much as 10 million times, and provide sub-nanometre resolution4. It is mind-boggling. In fact, we've already seen images of atoms using EM in part II, but perhaps it wasn't easy to appreciate just how amazing a feat that is.

Of course, EM is itself a family - and a large one at that, with many and diverse members. As with optical microscopy, each member of the family specialises in a given technique or combination of techniques. For example, the Scanning Electron Microscope (SEM) performs a scan of the object under study, and has a resolution of 1 nanometre or better; the Scanning Confocal Electron Microscope (SCEM) uses the same confocal technique mentioned above to provide higher depth resolution; and Transmission Electron Microscopy (TEM) has the ability to penetrate inside the specimen during the imaging process, given samples with a thickness of 100 nanometres or less.

A couple of noteworthy points are required at this juncture. First, whilst some of these EM techniques may sound new and exciting, most have been around for a very long time; it just seems they keep getting better and better as they mature. For example, TEM was used in the fifties to show that neurons communicate over synaptic junctions, but it's still wildly popular today. Secondly, it's important to understand that the entire imaging process is not at all trivial - certainly not for TEM, nor for EM in general, and probably not for Optical Microscopy either. It is just a very labour-intensive and very specialised process - most likely done by an expert human neuroanatomist - and the difficulties range from the chemical preparation of the samples all the way up to creating the images. The end product may give the impression it was easy to produce, but easy it was not.

At any rate, whatever the technical details, the fact is that the imagery that results from all these advances is truly evocative - haunting, even. Take this image produced by SEM:

Personally, I think it is incredibly beautiful; simultaneously awe-inspiring and depressing because it really conveys the messiness and complexity of wetware. By way of contrast, look at the neatness of man-made micro-structures:


Figure 3: The BlueGene/Q chip. Source: IBM plants transactional memory in CPU

Stacks and Stacks of 'Em

Technically, pictures like the ones above are called micrographs. As you can see in the neuron micrograph, these images provide a great visual description of the topology of the object we are trying to study. You may also notice a slight coloration of the cell in that picture; this is most likely due to the fact that the people doing the analysis stain the neuron to make it easier to image. Now, in practice - at least as far as I have seen, which is not very far at all, to be fair - researchers prefer 2D grayscale images to the nice, Public Relations friendly pictures like the one above; those appear to be more useful for magazine covers. The working micrographs are not quite as exciting to the untrained eye but are very useful to the professionals. Here's an example:


Figure 4: The left-hand side shows the original micrograph; the right-hand side shows the result of processing it with machine learning. Source: Deep Neural Networks Segment Neuronal Membranes in Electron Microscopy Images

Let's focus on the left-hand side of this image for the moment. It was taken using ssTEM - serial-section TEM, an evolutionary step in TEM. The ss part of ssTEM is helpful in creating stacks of images, which is why you see the little drawings on the left of the picture; they are there to give you the idea that the top-most image is one of 30 in a stack5. The process of producing the images above was as follows: they started off with a neuronal tissue sample, which was prepared for observation. The sample was 1.5 micrometres thick, and was then sectioned into 30 slices of 50 nanometres each. Each of these slices was imaged at a resolution of 4x4 nanometres per pixel.
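
Just to make those numbers concrete, here's a tiny sanity check in Python. The 512x512 image size is an assumption of mine purely for illustration - the real dimensions depend on the dataset - but the slice thickness, slice count and pixel size are the ones quoted above:

slice_thickness_nm = 50
num_slices = 30
pixel_size_nm = 4          # 4x4 nm per pixel
image_side_px = 512        # hypothetical image size, for illustration only

sample_thickness_um = num_slices * slice_thickness_nm / 1000.0
print(f"sample thickness: {sample_thickness_um} um")       # 1.5 um, as stated above

fov_um = image_side_px * pixel_size_nm / 1000.0
print(f"field of view: {fov_um} x {fov_um} um per image")  # ~2 x 2 um of tissue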

As you can imagine, this work is extremely sensitive to measurement error. The trick is to ensure there is some kind of visual continuity between images so that you can recreate a 3D model from the 2D slices. This means, for instance, that if you are trying to figure out connectivity, you need some way to relate a dendrite to its soma and, say, to the axon of the neuron it connects to - and that's one of the reasons why the slices have to be so thin. It would be no good if the pictures missed this information out, as you would not be able to recreate the connectivity faithfully. This is actually really difficult to achieve in practice due to the minute sizes involved; a slight tremor that displaces the sample by a few nanometres would cause shifts in alignment, and even with the high precision of the tools, you can imagine that there is always some kind of movement in the sample's position as part of the slicing process.
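
Correcting for these displacements - usually called registration - is therefore a big part of working with stacks. As a flavour of what a very naive version looks like, here is a minimal sketch assuming scikit-image and SciPy are available; it only handles a rigid shift between two adjacent slices and ignores all the genuinely hard parts, such as distortions, folds and missing sections:

import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def align_slices(reference, moving):
    """Estimate the (row, col) offset between two adjacent slices and undo it."""
    # phase correlation finds the translation that best overlaps the two images
    offset, error, _ = phase_cross_correlation(reference, moving)
    return nd_shift(moving, offset), offset

# toy data: a random "slice" and a copy displaced by a few pixels
rng = np.random.default_rng(0)
slice_a = rng.random((256, 256))
slice_b = nd_shift(slice_a, (3, -5))

aligned, offset = align_slices(slice_a, slice_b)
print(f"estimated offset: {offset}")  # ~[-3, 5]: the opposite of the shift we applied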

Images in a stack are normally stored using traditional formats such as TIFF6. You can see an example of the raw images in a stack here. It's worth noting that, even though the images are 2D grey-scale, since each pixel covers only a few nanometres (4x4 in this case), the full size of an image is very large. Indeed, the latest generation of microscopes produce stacks in the 500 Terabyte range, making the processing of the images a "big-data" challenge.
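
The arithmetic behind the "big-data" claim is easy enough to check. The figures below - a cubic millimetre of tissue and 8-bit pixels - are assumptions of mine chosen purely to illustrate the scaling, not numbers from any particular instrument:

pixel_size_nm = 4
slice_thickness_nm = 50
bytes_per_pixel = 1                             # 8-bit grey-scale

side_nm = 1_000_000                             # 1 mm of tissue on each side
pixels_per_side = side_nm // pixel_size_nm      # 250,000
num_slices = side_nm // slice_thickness_nm      # 20,000

bytes_per_slice = pixels_per_side ** 2 * bytes_per_pixel
total_bytes = bytes_per_slice * num_slices

print(f"one slice:  {bytes_per_slice / 1e9:.1f} GB")  # ~62.5 GB
print(f"full stack: {total_bytes / 1e12:.0f} TB")     # ~1250 TB, i.e. petabyte territory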

What To Do Once You've Got the Images

But back to the task at hand. Once you have the stack, the next logical step is to try to figure out what's what: which objects are in the picture. This is called segmentation and labelling, presumably because you are breaking the one big monolithic picture into discrete objects and giving them names. Historically, segmentation has been done manually, but it's a painful, slow and error-prone process. Due to this, there is a lot of interest in automation, and it has recently become feasible - what with the abundance of cheap computing resources as well as the advent of "useful" machine learning (rather than the theoretical variety). Cracking this puzzle is gaining traction amongst the programming herds, as you can see by the popularity of challenges such as this one: Segmentation of neuronal structures in EM stacks challenge - ISBI 2012. It is from this challenge that we sourced the stack and micrograph above; the right-hand side is the finished product after machine learning processing.
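
To give a flavour of what the automated approach involves, here is a minimal sketch of the kind of pixel classification popularised by tools like Ilastik (mentioned below): compute a few cheap features per pixel, train a random forest on a handful of hand-labelled pixels, and then predict a label for every pixel in the image. It assumes scikit-learn and SciPy are available, uses made-up toy data, and glosses over everything that makes the real problem hard - 3D context, class imbalance, and turning per-pixel labels into actual objects:

import numpy as np
from scipy import ndimage as ndi
from sklearn.ensemble import RandomForestClassifier

def pixel_features(image):
    """Stack a few cheap per-pixel features: raw intensity, smoothed intensity
    and gradient magnitude at a couple of scales."""
    feats = [image]
    for sigma in (1, 2, 4):
        feats.append(ndi.gaussian_filter(image, sigma))
        feats.append(ndi.gaussian_gradient_magnitude(image, sigma))
    return np.stack(feats, axis=-1)          # shape: (rows, cols, n_features)

# toy micrograph: a dark "membrane" band on a brighter background, plus noise
rng = np.random.default_rng(1)
image = rng.normal(0.7, 0.1, (128, 128))
image[60:68, :] = rng.normal(0.2, 0.05, (8, 128))

features = pixel_features(image)

# pretend a human labelled a few pixels: 1 = membrane, 0 = background
labels = np.full(image.shape, -1)
labels[62:66, 10:40] = 1
labels[10:30, 10:40] = 0
train_mask = labels >= 0

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(features[train_mask], labels[train_mask])

# classify every pixel in the image
segmentation = clf.predict(features.reshape(-1, features.shape[-1]))
segmentation = segmentation.reshape(image.shape)
print("membrane pixels found:", int(segmentation.sum()))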

There are also open source packages to help with segmentation. A couple of notable contenders are Fiji and Ilastik. Below is a screenshot of Ilastik.


Figure 5: Source: Ilastik gallery.

An activity that naturally follows on from segmentation and labelling is reconstruction. The objective of reconstruction is to try to "reconstruct" morphology given the images in the stack. It could involve inferring the missing bits of information by mathematical means or any other kind of analysis which transforms the set of discrete objects spotted by segmentation into something looking more like a bunch of connected neurons.

Once we have a reconstructed model, we can start performing morphometric analysis. As Wikipedia tells us, Morphometry is "the quantitative analysis of form"; as you can imagine, there are a lot of useful things one may want to measure in brain structures and sub-structures, such as lengths, volumes, surface areas and so on. Some of these measurements can of course be done in 2D, but life is made easier if the model is available in 3D. One tool that helps with this kind of analysis is NeuroMorph. It is an open source extension written in Python for the popular open source 3D computer graphics software Blender.
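
As a tiny example of the kind of measurement involved, here is a sketch that computes the path length, surface area and volume of a reconstructed dendrite described as a chain of 3D points with radii - roughly the representation used by formats such as SWC - by treating each consecutive pair of points as a truncated cone. The sample points are made up for illustration:

import math

# each sample: (x, y, z, radius), all in micrometres - made-up values
dendrite = [
    (0.0, 0.0, 0.0, 1.0),
    (5.0, 1.0, 0.0, 0.8),
    (9.0, 3.0, 1.0, 0.6),
    (12.0, 6.0, 1.5, 0.4),
]

length = area = volume = 0.0
for (x0, y0, z0, r0), (x1, y1, z1, r1) in zip(dendrite, dendrite[1:]):
    h = math.dist((x0, y0, z0), (x1, y1, z1))   # distance between consecutive samples
    s = math.sqrt(h ** 2 + (r0 - r1) ** 2)      # slant height of the frustum
    length += h
    area += math.pi * (r0 + r1) * s             # lateral surface of a truncated cone
    volume += math.pi * h * (r0**2 + r0*r1 + r1**2) / 3

print(f"path length:  {length:.2f} um")
print(f"surface area: {area:.2f} um^2")
print(f"volume:       {volume:.2f} um^3")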

Conclusion

This post was a bit of a whirlwind tour of some of the sources of real-world data for Computational Neuroscience. As I soon found out, each of these sections could easily have been ten times bigger and still not provide you with a proper overview of the landscape; having said that, I hope the post at least gives some impression of the terrain and its main features.

From a software engineering perspective, it's worth pointing out the lack of standardisation in information exchange. In an ideal world, one would want a pipeline with components to perform each of the steps of the complete process, from data acquisition off a microscope (either optical or EM), to segmentation, labelling, reconstruction and finally morphometric analysis. This would then be used as an input to the models. Alas, no such overarching standard appears to exist.

One final point in terms of Free and Open Source Software (FOSS). On the one hand, it is encouraging to see the large number of FOSS tools and programs being used. Unfortunately - at least for lovers of Free Software - there are also some widely used proprietary tools, such as NeuroLucida. Since the software is so specialised, the fear is that in the future the better-funded commercial enterprises will take over more and more of the space.

That's all for now. Don't forget to tune in for the next instalment!

Footnotes:

1

As it happens, what we are doing here is applying a well-established learning methodology called the Feynman Technique. I was blissfully unaware of its existence all this time, even though Feynman is one of my heroes and even though I had read a fair bit about the man. On this topic (and the reason why I came to know about the Feynman Technique), it's worth reading Richard Feynman: The Difference Between Knowing the Name of Something and Knowing Something, where Feynman discusses his disappointment with science education in Brazil. Unfortunately the Portuguese and the Brazilian teaching systems have a lot in common - or at least they did when I was younger.

2

Nor is the microscope the only way to figure out what is happening inside the brain. For example, there are neuroimaging techniques which can provide data about both structure and function.

3

Patented by Marvin Minsky, no less - yes, he of Computer Science and AI fame!

4

And, to be fair, sub-nanometre doesn't quite capture just how low these things can go. For an example, read Electron microscopy at a sub-50 pm resolution.

5

For a more technical but yet short and understandable take, read Uniform Serial Sectioning for Transmission Electron Microscopy.

6

On the topic of formats: it's probably time we mention the Open Microscopy Environment (OME). The microscopy world is dominated by hardware, and as such it's the perfect environment for corporations, their proprietary formats and expensive software packages. The OME guys are trying to buck the trend by creating a suite of open source tools and protocols, and, looking at some of their stuff, they seem to be doing alright.


Wednesday, November 11, 2015

Nerd Food: Tooling in Computational Neuroscience - Part I: NEURON

In the previous series of posts we did a build-up of theory - right up to the point where we were just about able to make sense of Integrate and Fire, one of the simpler families of neuron models. The series used a reductionist approach - or bottom-up, if you prefer1. We are now starting a new series with the opposite take, this time coming at it from the top. The objective is to provide a (very) high-level overview - in layman's terms, still - of a few of the "platforms" used in computational neuroscience. As this is a rather large topic, we'll try to tackle a couple of platforms in each post, discussing a little bit of their history, purpose and limitations - whilst trying to maintain a focus on file formats or DSLs. "File formats" may not sound particularly exciting at first glance, but it is important to keep in mind that these are instances of meta-models of the problem domain in question, and as such, their expressiveness is very important. Understand those and you've understood a great deal about the domain and about the engineering choices of those involved.

But first, let's introduce Computational Neuroscience.

Computers and the Brain

Part V of our previous series discussed some of the reasons why one would want to model neurons (section Brief Context on Modeling). What we did not mention is that there is a whole scientific discipline dedicated to this endeavour, called Computational Neuroscience. Wikipedia has a pretty good working definition, which we will take wholesale. It states:

Computational neuroscience […] is the study of brain function in terms of the information processing properties of the structures that make up the nervous system. It is an interdisciplinary science that links the diverse fields of neuroscience, cognitive science, and psychology with electrical engineering, computer science, mathematics, and physics.

Computational neuroscience is distinct from psychological connectionism and from learning theories of disciplines such as machine learning, neural networks, and computational learning theory in that it emphasizes descriptions of functional and biologically realistic neurons (and neural systems) and their physiology and dynamics. These models capture the essential features of the biological system at multiple spatial-temporal scales, from membrane currents, proteins, and chemical coupling to network oscillations, columnar and topographic architecture, and learning and memory.

These computational models are used to frame hypotheses that can be directly tested by biological or psychological experiments.

Lots of big words, of course, but hopefully they make some sense after the previous posts. If not, don't despair; what they all hint at is an "interdisciplinary" effort to create biologically plausible models, and to use these to provide insights on how the brain performs certain functions. Think of the Computational Neuroscientist as the right-hand person of the Neuroscientist - the "computer guy" to the "business guy", if you like. The Neuroscientist (particularly the experimental Neuroscientist) gets his or her hands messy with wetware and experiments, which end up providing data and a better biological understanding; the Computational Neuroscientist takes these and uses them to make improved computer models, which are used to test hypotheses or to make new ones, which can then be validated by experiments, and so on, in a virtuous feedback loop.2 Where the "interdisciplinary" part comes in is that many of the people doing the role of "computer guys" are actually not computer scientists but instead come from a variety of backgrounds such as biology, physics, chemistry and so on. This variety adds a lot of value to the discipline because the brain is such a complex organ; understanding it requires all kinds of skills - and then some.

It's Models All the Way Down

At the core, then, the work of the Computational Neuroscientist is to create models. Of course, as we've already seen, one does not just walk straight into Mordor and start creating the "most biologically plausible" model of the brain possible; all models must have a scope as narrow as possible if they are to be a) understandable and b) computationally feasible. Thus engineering trade-offs are crucial to the discipline.

Also, it is important to understand that creating a model does not always imply writing things from scratch. Instead, most practitioners rely on a wealth of software available, all with different advantages and disadvantages.

At this juncture you are probably wondering what exactly these "models" we speak so much of are. Are they just equations like IaF? Well, yes and no. As it happens, all models have roughly the following structure:

  • a morphology definition: we've already spoken a bit about morphology; think of it as the definition of the entities that exist in your model, their characteristics and relationships. This is actually closer to what we computer scientists think the word modeling means. For example, the morphology defines how many neurons you have, how many axons and dendrites, connectivity, spatial positioning and so on.
  • a functional, mathematical or physical definition: I've heard it named in many ways, but fundamentally, what it boils down to is the definition of the equations that your model requires. For example, are you modeling electrical properties or reaction/diffusion?

For the simpler models, the morphology gets somewhat obscured - after all, in LIF, there is very little information about a neuron because all we are interested in are the spikes. For other models, a lot of morphological details are required.

The Tooling Landscape

Idealised…

It is important to keep in mind that these models are to be used in a simulation; that is, we are going to run the program for a period of time (hours or days) and observe different aspects of its behaviour. Thus the functional definition of the model provides the equations that describe the dynamics of the system being simulated and the morphology will provide some of the inputs for those equations.
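
To make this concrete, here is a bare-bones sketch of what "running a simulation" means for the leaky integrate-and-fire model we met in the previous series: the functional definition is the differential equation being stepped forward in time, and the parameters (and, in richer models, the morphology) provide its inputs. The parameter values below are illustrative only:

import numpy as np

# illustrative LIF parameters
tau_m, E_L, R_m = 10.0, -65.0, 10.0     # ms, mV, Mohm
v_thresh, v_reset = -50.0, -65.0        # mV
dt, t_stop = 0.1, 100.0                 # ms
i_ext = 2.0                             # nA of injected current

v = E_L
spikes = []
for step in range(int(t_stop / dt)):
    # forward Euler step of: tau_m * dv/dt = -(v - E_L) + R_m * i_ext
    v += dt * (-(v - E_L) + R_m * i_ext) / tau_m
    if v >= v_thresh:                   # threshold crossed: record a spike and reset
        spikes.append(step * dt)
        v = v_reset

print(f"{len(spikes)} spikes, first at t = {spikes[0]:.1f} ms" if spikes else "no spikes")

Real platforms wrap exactly this kind of loop in better numerics, much bigger models and proper simulation control - which is what the requirements below are about.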

From here one can start to sketch the requirements for a system for the Computational Neuroscientist:

  • a platform of some kind to provide simulation control: starting, stopping, re-running, storing the results and so on. As the simulations can take a long time to run, the data sets can be quite large - in the hundreds-of-gigabytes range - so efficient handling of the output data is a must.
  • some kind of DSL that provides a user-friendly way to define models, ideally with a graphical user interface that helps author the DSL. The DSL must cover the two aspects we mentioned above.
  • efficient libraries of numerical routines to help solve the equations. The libraries must be exposed in some way to the DSL so that users can make use of them when defining the functional aspects of the model.

Architecturally, the ability to use a cluster or GPUs would of course be very useful, but we shall ignore those aspects for now. Given this idealised platform, we can now make a bit more sense of what actually exists in the wild.

… vs Actual

The multidisciplinary nature of Computational Neuroscience poses some challenges when it comes to software development: as mentioned, many of the practitioners in the field do not have a Software Engineering background; of those that do, most tend not to have strong biology and neuroscience backgrounds. As a result, the landscape is fragmented and the quality is uneven. On the one hand, most of the software is open source, making reuse a lot less of a problem. On the other hand, things such as continuous integration, version control, portability, user interface guidelines, organised releases, packaging and so on are still lagging behind most "regular" Free and Open Source projects3.

In some ways, to enter Computational Neuroscience is a bit like travelling in time to an era before git, before GitHub, before Travis and all the other things we take for granted. Not everywhere, of course, but still in quite a few places, particularly with the older and more popular projects. One cannot help but get the feeling that the field could do with some of the general energy we have in the FOSS community, but the technical barriers to contributing tend to be large since the domain is so complex.

So after all of this boring introductory material, we can finally look at our first system.

NEURON

Having to choose, one feels compelled to start with NEURON - the most venerable of the lot, with roots in the 80s4. NEURON is a simulation environment with great depth of functionality and a comprehensive user manual published as a (non-free) book. For the less wealthy, an overview paper is available, as are many other online resources. The software itself is fully open source, with a public mercurial repo.

As with many of the older tools in this field, NEURON development has not quite kept pace with the latest and greatest. For instance, it still has a Motif'esque look to its UI but, alas, do not be fooled - it's not Motif but InterViews - a technology I had never heard of, but which seems to have been popular in the 80's and early 90's. One fears that NEURON may just be the last widely used program relying on InterViews - and the fact that they carry their own fork of it does not make me hopeful.


Figure 1: Source: NEURON Cell Builder

However, once one goes past these layers of legacy, the domain functionality of the tool is very impressive. This goes some way to explain why so many people rely on it daily and why so many papers have been written using it - over 600 papers at the last count.

Whilst NEURON is vast, we are particularly interested in only two aspects of it: hoc and mod (in its many incarnations). These are the files that can be used to define models.

Hoc

Hoc has a fascinating history and a pedigree to match. It is actually the creation of Kernighan and Pike, two UNIX luminaries, and sits alongside tools like bc and dc. NEURON took hoc and extended it, both in terms of syntax and in the number of available functions; NEURON Hoc is now an interpreted object-oriented language, albeit with some limitations such as a lack of inheritance. Programs written in hoc execute in an interpreter called oc. There are a few variations of this interpreter, with different kinds of libraries made available to the user (UI, neuron modeling specific functionality, etc.) but the gist of it is the same, and the strong point is interactive development with rapid feedback. On the GUI versions of the interpreter, the script can specify its UI elements, including input widgets for parameters and widgets to display the output. Hoc is thus used as a mix between model/view logic and morphological definition language.

To get a feel for the language, here's a very simple sample from the manual:

create soma    // model topology
access soma    // default section = soma

soma {
   diam = 10   // soma dimensions in um
   L = 10/PI   //   surface area = 100 um^2
}

NMODL

The second language supported by NEURON is NMODL - The NEURON extended MODL (Model Description Language). NMODL is used to specify a physical model in terms of equations such as simultaneous nonlinear algebraic equations, differential equations and so on. In practice, there are actually different versions of NMODL for different NEURON versions, but to keep things simple I'll just abstract these complexities and refer to them as one entity5.

As intimated above, NMODL is a descendant of MODL. As with Hoc, the history of MODL is quite interesting; it was a language defined by the National Biomedical Simulation Resource to specify models for use with SCoP - the Simulation Control Program6. From what I can gather of SCoP, its main purpose was to make life easier when creating simulations, providing an environment where users could focus on what they were trying to simulate rather than on nitty-gritty implementation-specific details.

NMODL took MODL syntax and extended it with the primitives required by its domain; for instance, it added the NEURON block to the language, which allows multiple instances of "entities". As with MODL, NMODL is translated into efficient C code and linked against supporting libraries that provide the numerics; the NMODL translator to C also had to take into account the requirement of linking against NEURON libraries rather than SCoP.

Below is a snippet of NMODL code, copied from the NEURON book (chapter 9, listing 9.1):

NEURON {
  SUFFIX leak
  NONSPECIFIC_CURRENT i
  RANGE i, e, g
}

PARAMETER {
  g = 0.001  (siemens/cm2)  < 0, 1e9 >
  e = -65    (millivolt)
}

ASSIGNED {
  i  (milliamp/cm2)
  v  (millivolt)
}

NMODL and hoc are used together to form a model: hoc provides the UI, parameters and morphology, and NMODL provides the physical modeling. The website ModelDB provides a database of models for a variety of platforms, with the main objective of making research reproducible. Here you can see an example of a production NEURON model in its full glory, with a mix of hoc and NMODL files - as well as a few others such as session files, which we can ignore for our purposes.
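
To see how the two halves meet in practice, here is a minimal sketch using NEURON's Python interface (which NEURON ships alongside hoc) of how one might build the soma from the hoc listing and insert the leak mechanism from the NMODL listing into it. It assumes the mod file has already been compiled with nrnivmodl so that NEURON knows about the leak mechanism:

from math import pi
from neuron import h

h.load_file('stdrun.hoc')        # standard run system: run(), tstop and friends

soma = h.Section(name='soma')    # same morphology as the hoc listing above
soma.diam = 10                   # um
soma.L = 10 / pi                 # so that surface area ~ 100 um^2

soma.insert('leak')              # the density mechanism declared by "SUFFIX leak"
soma(0.5).g_leak = 0.001         # RANGE variable g, exposed with the suffix appended

h.tstop = 100                    # ms
h.run()
print(soma(0.5).v)               # membrane potential at the end of the run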

Thoughts

NEURON is more or less a standard in Computational Neuroscience - together with a few other tools such as GENESIS, which we shall cover later. Embedded deeply in its source code is the domain logic learned painstakingly over several decades. Whilst, software engineering-wise, it is creaking at the seams, finding a next-generation heir will be a non-trivial task given the features of the system, the number of models that exist out there, and the knowledge and large community that uses it.

Due to this, a solution that a lot of next-generation tools have developed is to use NEURON as a backend, providing a shiny modern frontend and then generating the appropriate hoc and NMODL required by NEURON. This is then executed in a NEURON environment and the results are sent back to the user for visualisation and processing using modern tools. Le Roi Est Mort, Vive Le Roi!

Conclusions

In this first part we've outlined what Computational Neuroscience is all about, what we mean by a model in this context and what services one can expect from a platform in this domain. We also covered the first such platform. Tune in for the next instalment where we'll cover more platforms.

Footnotes:

1

I still owe you the final post of that series, coming out soon, hopefully.

2

Of course, once you scratch the surface, things get a bit murkier. Erik De Schutter states:

[…] The term is often used to denote theoretical approaches in neuroscience, focusing on how the brain computes information. Examples are the search for “the neural code”, using experimental, analytical, and (to a limited degree) modeling methods, or theoretical analysis of constraints on brain architecture and function. This theoretical approach is closely linked to systems neuroscience, which studies neural circuit function, most commonly in awake, behaving intact animals, and has no relation at all to systems biology. […] Alternatively, computational neuroscience is about the use of computational approaches to investigate the properties of nervous systems at different levels of detail. Strictly speaking, this implies simulation of numerical models on computers, but usually analytical models are also included […], and experimental verification of models is an important issue. Sometimes this modeling is quite data driven and may involve cycling back and forth between experimental and computational methods.

3

This is a problem that has not gone unnoticed; for instance, this paper provides an interesting and thorough review of the state of the union in Computational Neuroscience: Current practice in software development for computational neuroscience and how to improve it. In particular, it explains the dilemmas faced by the maintainers of neuroscience packages.

4

The early story of NEURON is available here; see also the scholarpedia page.

5

See the NMODL page for details, in the history section.

6

As far as I can see, in the SCoP days MODL was just called the SCoP Language, but as the related paper is behind a paywall I can't prove it either way. Paper: SCoP: An interactive simulation control program for micro- and minicomputers, from Springer.


Monday, November 09, 2015

Nerd Food: Interesting...

Time to flush all those tabs again. Some interesting stuff I bumped into recently-ish.

Finance

Startups et. al.

General Coding

Databases

C++

Layman Science

Other
