Interesting Articles (5)

Three-dimensional immersive virtual reality for studying cellular compartments in 3D models from EM preparations of neural tissues

This is another interesting use case of Blender in neuroscience: studying the elements and statistical properties of neural tissue from data obtained with an electron microscope.

Advances in the application of electron microscopy (EM) to serial imaging are opening doors to new ways of analyzing cellular structure. New and improved algorithms and workflows for manual and semiautomated segmentation allow us to observe the spatial arrangement of the smallest cellular features with unprecedented detail in full three-dimensions. From larger samples, higher complexity models can be generated; however, they pose new challenges to data management and analysis. Here we review some currently available solutions and present our approach in detail. We use the fully immersive virtual reality (VR) environment CAVE (cave automatic virtual environment), a room in which we are able to project a cellular reconstruction and visualize in 3D, to step into a world created with Blender, a free, fully customizable 3D modeling software with NeuroMorph plug-ins for visualization and analysis of EM preparations of brain tissue. Our workflow allows for full and fast reconstructions of volumes of brain neuropil using ilastik, a software tool for semiautomated segmentation of EM stacks. With this visualization environment, we can walk into the model containing neuronal and astrocytic processes to study the spatial distribution of glycogen granules, a major energy source that is selectively stored in astrocytes. The use of CAVE was key to the observation of a nonrandom distribution of glycogen, and led us to develop tools to quantitatively analyze glycogen clustering and proximity to other subcellular features. J. Comp. Neurol., 2015. © 2015 Wiley Periodicals, Inc.

Parametric Anatomical Modeling

I am proud to announce the start of a new long term project that begins with a paper that came out recently.

Pyka M, Klatt S, Cheng S (2014) Parametric Anatomical Modeling: A method for modeling the anatomical layout of neurons and their projections, Frontiers in Neuroanatomy.

PAM is the foundation for studying generic and specific coding principles of neural networks. The idea is to create artificial neural networks whose connectivity properties and connection lengths are derived from reconstructed anatomical data. Using a 3D environment to model and describe real neural networks, we can capture complex relationships between layers of neurons. With PAM, we can account for connectivity properties that are influenced by global and local anatomical axes, for non-linear relationships between the distances of neurons and their connection lengths, and we can incorporate many kinds of experimental data, such as images from tracer studies, directly into our model.
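As a toy illustration of the distance-dependent connectivity idea, here is a minimal sketch (not PAM's actual API; all names below are made up) that connects two neuron populations with a probability decaying exponentially with their distance:

```python
import math
import random

def distance_dependent_connect(pre, post, length_scale=1.0, seed=0):
    """Toy model: connect neurons in `pre` to neurons in `post` with a
    probability that decays exponentially with Euclidean distance.
    PAM's real strength is that such distances come from reconstructed
    anatomy (e.g. paths along mesh layers), not from straight lines."""
    rng = random.Random(seed)
    edges = []
    for i, p in enumerate(pre):
        for j, q in enumerate(post):
            d = math.dist(p, q)
            if rng.random() < math.exp(-d / length_scale):
                edges.append((i, j, d))  # keep the connection length, too
    return edges

# Two tiny populations on a line: nearby pairs connect far more often.
pre = [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0)]
post = [(0.1, 0.0, 0.0), (5.1, 0.0, 0.0)]
edges = distance_dependent_connect(pre, post)
```

With the fixed seed, only the two close pairs end up connected, while the distant pairs (distance ≈ 5) are rejected with high probability.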

In the long run, I hope that PAM can help us understand how the structure as well as self-organizing principles (like diverse forms of plasticity) contribute to the topology, and thereby also to the function, of the network. In particular, we are interested in the functioning of the hippocampus.

The project website is hosted on Github.

Updated on 26.05.2015:

Replaced the link to Bitbucket by a link to Github.

3D printed Hippocampus at Shapeways

If you, like me, work on the hippocampus, you know that it is incredibly hard to understand what the hippocampal formation actually looks like in 3D. In the rat, the Dentate Gyrus and CA3-CA1 lie along the dorsal-ventral axis but also traverse the medial-lateral and anterior-posterior axes.

Now you can order a 3D print of the hippocampus at Shapeways, which helps a lot in understanding what the neural layers look like and how they are interwoven.

Great visualizations by Janet Iwasa

This TED talk shows some impressive visualizations of molecular reactions. In particular, the emergence of the football-like shape (at around 1:16) is astonishing.

The software they use, called Molecular Flipbook, was developed by a team around Janet Iwasa and is based on Blender. This is another great example of Blender becoming an increasingly helpful tool for scientists.

Blender Addon: Map via UV

Here is a little operator that I wrote for Blender. It is basically a side result of a larger project I am working on, but I thought it was worth sharing as it might be useful for others as well.

The aim is to bring a given mesh surface into another shape. For example, in the picture below, I would like to transform the right mesh (the origin) into the form of the left mesh (the target) while maintaining its internal topology.


I found that the easiest way to achieve this is to unwrap both objects and simply define in UV space how the right mesh should cover the left mesh.


The only important point is that every point in the UV map of the origin mesh must lie within the area covered by the UV map of the target mesh. Then hit “map via UV” and you get
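The core of the mapping can be sketched in plain Python (a simplified stand-in for the addon, deliberately without any `bpy` code): for each vertex of the origin mesh, find the target UV triangle that contains its UV coordinate, compute barycentric weights there, and blend the 3D corners of that target face with the same weights.

```python
def barycentric(p, a, b, c):
    """Barycentric weights of 2D point p with respect to triangle (a, b, c)."""
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    u = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    v = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return u, v, 1.0 - u - v

def map_uv_point(uv, target_uv_tris, target_3d_tris, eps=1e-9):
    """Return the 3D position on the target surface at UV coordinate `uv`.
    Assumes the target is triangulated; returns None if `uv` lies outside
    every target UV triangle (the failure case mentioned above)."""
    for uv_tri, tri3d in zip(target_uv_tris, target_3d_tris):
        u, v, w = barycentric(uv, *uv_tri)
        if min(u, v, w) >= -eps:  # uv lies inside this UV triangle
            return tuple(u * tri3d[0][k] + v * tri3d[1][k] + w * tri3d[2][k]
                         for k in range(3))
    return None

# One target triangle: its UV corners and the matching 3D corners.
uv_tris = [((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))]
tris_3d = [((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 2.0))]
print(map_uv_point((0.5, 0.0), uv_tris, tris_3d))  # → (1.0, 0.0, 0.0)
```

Running this over every origin vertex (using its unwrapped UV coordinate) yields the new vertex positions while leaving the origin mesh's connectivity untouched, which is exactly why the internal topology is preserved.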


The addon, additional information, and some debugging issues can be found and discussed in this BlenderArtists thread.