Research axes
Datasets
Look for purpose in the place where your deep gladness meets the world's needs (Yogi tea)
During the FRArt research in 2022, a thread on databases was created that helps us think about the way we want to explore experiments. Some ideas come from experience, some from questions and discussions. All threads lead to more questions and discussions.
Following the principles of the Paris-based collective Oulipo, we consider all types of texts as potential literature.
Some overall questions could be:
- how can we enhance the presence of and relationship with more-than-humans and cosmovision practices? Can we create alternative narratives? Might there be datasets that incorporate more indigenous, situated and folk knowledge?
- how do we create a de-anthropomorphised, de-centered point of view?
- how can we work to undo the histories of violence and exclusion that already exist? How do we formulate a critique of collections of structured digital data that come with a knot of conventions and traditions from military, colonial and corporate management and control systems? What kind of datasets are not loaded with these problems?
Possible threads
Focus on non-dominant discourses
Local and real time data (pos/neg)
link the generation of text to the waves of the tides, the moon cycle, the quality of the air in certain places, or local tree databases
Activist stories & information
Literature on nature
from the book Radical Botany, by Natania Meeker
Julien Offray de La Mettrie
Anne Richter
Emily Dickinson
Dominique Brancher, Quand l'esprit vient aux plantes
Guy de la Brosse, De la nature, vertu et utilité des plantes (1628)
Cyrano de Bergerac, Les Etats et Empires de La Lune
Cyrano de Bergerac, Les Etats et Empires du Soleil
Octavia Butler, Parable of the Sower (image of the seed): biopolitics of vegetality
Octavia Butler, Lilith's Brood: vegetal matter
Jamaica Kincaid, My Garden: about the question of the colonial history of the garden
Explosive artefacts
How can you critically work with a database that is rooted in colonial, military, Western neoliberal history? How to avoid confirming it, giving it positive attention?
As part of this research trajectory, Anaïs Berck was in residency at the Botanical Garden of Meise in March 2022. There we encountered texts and databases that inform their practices on a daily basis.
Some interesting texts are:
The Nagoya Protocol on Access and Benefit Sharing (ABS) is a 2010 supplementary agreement to the 1992 Convention on Biological Diversity (CBD). Its aim is the implementation of one of the three objectives of the CBD: the fair and equitable sharing of benefits arising out of the utilization of genetic resources, thereby contributing to the conservation and sustainable use of biodiversity. It sets out obligations for its contracting parties to take measures in relation to access to genetic resources, benefit-sharing and compliance.
In short, it has now become officially illegal to travel to a country, bring back species and exploit them commercially without sharing the benefits with the country of origin.
A fascinating book that specifies all the rules, principles and usages for naming plants, algae and fungi. It is updated every four years during an international conference.
Interviews with scientists working in the Botanical Garden of Meise revealed a myriad of local and global databases that they use in their daily practice:
GBIF : Global Biodiversity Information Facility - https://www.gbif.org/ : a database from which you can extract the species list per region -- for example Belgium -- and the geolocalisations for each species
IPNI : International Plant Names Index - https://www.ipni.org/
JSTOR - https://www.jstor.org/ : holds African type specimens from 245 institutions. People outside of Africa pay for access, while it is free for people in African countries, who otherwise often don't have access to the data.
BGCI : Botanic Gardens Conservation International - https://www.bgci.org/ : a lot of funding goes into working on this common database
BICIKL - https://bicikl-project.eu/ : the idea is to connect different types of biodiversity data; it is an online EU platform for researchers without programming skills
Plazi - https://plazi.org/ : a database of literature, a taxonomic treatment database with formal texts
Global Biotic Interaction Database - https://www.globalbioticinteractions.org/ : provides open access to finding species interaction data (e.g., predator-prey, pollinator-plant, pathogen-host, parasite-host) by combining existing open datasets using open source software.
ORCID - https://orcid.org/ : a unique identifier for each researcher
Bionomia - https://bionomia.net/ : Bionomia is developed and maintained by David P. Shorthouse, using specimen data periodically downloaded from the Global Biodiversity Information Facility (GBIF) and authentication provided by ORCID. It was launched in August 2018 as a submission to the annual Ebbe Nielsen Challenge. Since then, Wikidata identifiers have been integrated to capture the names and the dates of birth and death of deceased biologists, to help maximize downstream data integration and engagement, and as a means to discover errors or inconsistencies in natural history specimen data.
Scholia - https://scholia.toolforge.org/ : gives you a profile based on a researcher's Wikidata entry, with the number of publications per year, the topics concerned, and a few visualisations: where work was published, but also co-author graphs. You can see groups and see how a topic propagates, for example masters or invasive species. If you want to do something on species, you can probably start from here. You take a species on Wikidata: it's a rabbit hole!
Ecological Restoration Alliance of Botanical Gardens - https://www.erabg.org/
ENSCONET : European Native Seed Conservation Network - https://cordis.europa.eu/project/id/506109/reporting
Darwin Core - https://dwc.tdwg.org/ : Darwin Core is a standard maintained by the Darwin Core Maintenance Interest Group. It includes a glossary of terms (in other contexts these might be called properties, elements, fields, columns, attributes, or concepts) intended to facilitate the sharing of information about biological diversity by providing identifiers, labels, and definitions. Darwin Core is primarily based on taxa, their occurrence in nature as documented by observations, specimens, samples, and related information.
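As a sketch of how databases like these are consulted in practice, here is a minimal, hedged example of composing a GBIF occurrence-search request. The endpoint and parameters follow GBIF's public API at api.gbif.org; the helper function name is our own.

```python
# Hedged sketch: building a GBIF occurrence-search request for one country.
# The endpoint and parameter names follow the public GBIF API (api.gbif.org/v1).
from urllib.parse import urlencode

GBIF_API = "https://api.gbif.org/v1/occurrence/search"

def gbif_occurrence_url(country_code, limit=20, offset=0):
    """Return a search URL for occurrences in a country (ISO 3166-1 alpha-2 code)."""
    params = {"country": country_code, "limit": limit, "offset": offset}
    return GBIF_API + "?" + urlencode(params)

# For Belgium:
url = gbif_occurrence_url("BE", limit=5)
# Fetching this URL (e.g. with urllib.request) returns JSON whose "results" list
# carries scientificName, decimalLatitude and decimalLongitude per record.
```

Paging through `offset` makes it possible to harvest a full species list per region, as described above.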
Data scientist Anne-Laure analysed the herbarium for the stories it does and does not tell, and for the variables that are not given attention (for example local names, the environment where the plant was found, a description of the tree/bush/plant): https://cloud.local.algoliterarypublishing.net/s/MBffHtzwGekNqN8
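Variables like vernacular names and habitat actually have standard Darwin Core terms. Below is a minimal sketch of what one herbarium occurrence could look like as a Darwin Core-style record; the term names are genuine Darwin Core, but the values and the small completeness check are invented for illustration.

```python
# A minimal sketch of a Darwin Core occurrence record as a flat mapping.
# The keys are genuine Darwin Core terms; the values are invented.
occurrence = {
    "occurrenceID": "urn:example:meise:0001",  # hypothetical identifier
    "basisOfRecord": "PreservedSpecimen",
    "scientificName": "Doliocarpus major J.F.Gmel.",
    "vernacularName": "",                      # the often-missing local name
    "habitat": "",                             # the often-missing environment
    "country": "Belgium",
    "decimalLatitude": 50.93,
    "decimalLongitude": 4.33,
    "eventDate": "1898-07-14",
}

def missing_terms(record, wanted=("scientificName", "basisOfRecord", "eventDate")):
    """Report which of a chosen set of terms a record lacks."""
    return [term for term in wanted if not record.get(term)]

missing = missing_terms(occurrence)
```

A check like this could just as well be pointed at `vernacularName` and `habitat`, making visible exactly the silences Anne-Laure's analysis describes.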
Guidelines for Collaboration
Note : These guidelines are adapted from the Constant’s Collaboration Guidelines: https://constantvzw.org/wefts/orientationspourcollaboration.en.html
Anaïs Berck was launched by An Mertens in October 2019. The idea of the collective grew out of An's 13-year-long commitment as a member of the coordinating team and activities of Constant. Anaïs Berck therefore shares the philosophy of Constant. Constant is a non-profit, artist-run organisation based in Brussels since 1997 and active in-between art, media and technology. Constant develops, investigates and experiments. Constant departs from feminisms, copyleft, Free/Libre + Open Source Software and works on those vectors through an intersectional perspective. More about Constant: https://constantvzw.org/site/-About-Constant-7-.html
COLLABORATION GUIDELINES
for the residency of Anaïs Berck in September/October 2022
Commitment
Every participant of this residency is committed to the focus of the residency as mentioned in the open call:
During the residency participants develop algoliterary publications. These are publishing experiments with algorithms and literary, scientific and activist datasets about trees and nature. We ask ourselves: who and what is excluded, made invisible or exploited in the existing representations, discourses, tools and practices? How can we restore their presences in histories and storytelling? How can we heal and transform ourselves, our tools, our practices, our relationships to the world, our legacies? How can we help to destabilize the centrifugal force in botany, computation and publishing? How can we make books, databases and algorithms visible as objects of doubt, and how do we go beyond their established forms?
In case of doubt it will be important to look at the bigger picture: does your proposal for the algoliterary publishing house correspond to stories you want to tell in order to contribute to a healthier relationship with Mother Earth and fight against climate change? And to whom are these stories addressed?
Anaïs Berck is committed to environments where possible futures, complex collectivities and desired technologies can be experimented. The spaces that we initiate are therefore explicitly opposed to sexism, racism, queer antagonism, ableism and other kinds of oppression. Our positioning is one of risk-taking and trial and error in which rigour and critique meet with humour, insecurity, tension, ambiguity and mistakes. Fearless, brave environments empower radical complexity.
Departing from feminisms means for Anaïs Berck to be attentive to the sometimes generative, often oppressive arrangements of power, privilege and difference. We understand these arrangements to be related to gender and always to intersect with issues of for example class, race and ability. Finding ways to come to terms with the long colonial history of computation and botany, the way technology impacts ecology, and the relations between them, deserves our ongoing attention.
Anaïs Berck acknowledges that there are asymmetries and inequalities present in any group of human beings. We acknowledge that we all carry wounds and blind spots with us and that these can lead to tension. Therefore we encourage people to be present with a welcoming, listening and questioning, rather than a judging attitude; being conscious that Anaïs Berck attempts to operate from inclusivity rather than exclusivity. We want our work to take very different human beings and their own universes into account but also to include historical and future other-than-human agents. This means to keep challenging our assumptions and to welcome being challenged about ways we might be able to address the intersections of privilege, power, history, culture, inequality, pain, and injustice.
Every day another participant will take on the role of the contact person. If we are feeling unsafe or see someone who seems in distress, we can immediately find the contact person. They will do their best to help, to address the issue and/or to find the correct assistance if relevant/necessary. Information will be handled with sensitivity.
The past years have confirmed that governmental laws/regulations/measures have often been out of sync with actual needs, so be ready to re-discuss collectively how to relate to these. We encourage everyone to proactively express discomfort and to sense it in the others around you.
Anaïs Berck supports Free Culture because it is a way to acknowledge that culture is a collective effort that deserves to be shared. There is no tabula rasa, no original author; there is, though, a genealogy and a web of references. When it comes to technology, we think Free Software can make a difference because it invites us to consider, interrogate and discuss the technical details of software and hardware, and to engage with their concepts, politics and histories. Over the last years, we have come to the realisation that being affirmative of Free Culture has to come with more critical considerations. We want to take into account the links of Open Access ideology to colonial extractivism, which can obstruct the imagination of complexity and porosity. In addition we want to take into account the rights to opacity in access and transmission of knowledge, especially in regard to marginalized communities. Constant has written a license which tries to address these considerations. We are experimenting with this license and now distribute all Anaïs Berck's work under the Collective Conditions for (Re)use license: https://constantvzw.org/wefts/cc4r.en.html.
The experiments as well as the code you will develop during this residency will only be published with your consent, mentioning Anaïs Berck - An Algoliterary Publishing House, and all the humans, trees and algorithms that are part of the experiment. At the end of the residency we will decide what happens with the dedicated server and all materials it contains.
Collaboration Guidelines
We wrote a short version of the guidelines. On the wall you can find a longer version; we invite you to read that one too when you find some time.
Residencies are intensive transdisciplinary situations to which participants from very different backgrounds contribute. Because of the intensity of exchanges and interactions during residencies, there can be moments of disagreement and discomfort. These moments need to be acknowledged and discussed, within your own limits.
Even if some of the below guidelines sound obvious, we have experienced that being together can be complicated. We have written these guidelines to think of ways to be together comfortably and attentively. Furthermore, by addressing the guidelines as part of each residency, we hope to create dynamic ways to keep training our abilities to expand and strengthen braver spaces. The guidelines are meant to create potentiality for all, and sometimes this is done by restricting the space taken by some.
Collaboration Guidelines - Short Version
Collaborators with and within Anaïs Berck take the following into account:
• If you feel you're judging, leave the room and come back.
• Everything you do from the heart is good.
• If you prefer to do nothing, stay present and sustain the group energy.
• Enjoy the process, don't be obsessed by doing it 'right'. There is no success or failure. It is the process that counts.
• We're all learning to stay with the trouble in the complexities of climate change, injustices, paternalism, exploitations, privileges.
• All you need is to feel the willingness to show up and be present.
• Refusing and deconstructing sexism, racism, queer antagonism, ableism, ageism and other kinds of oppression.
• Leaving physical, emotional and conceptual room for other people.
• Respecting other beings, present or not, human or more-than-human.
• Caring for physical and digital environments.
• Avoiding speaking for others.
• Trying not to be solely guided by your preconceptions.
• Taking time to actually listen.
• Asking before assuming.
• Welcoming multiple processes of (un)learning. The exchange of information, experience and knowledge comes in many forms.
• Accepting differences. Appreciating divergence in pace, points of view, backgrounds, references, needs and limits.
• Recognizing that words and ways of speaking impact people in various ways.
• Caring for language gaps. This is a multi-lingual environment.
• Using Free, Libre and Open Source software whenever possible.
• Asking for explicit consent before sharing photographs or recordings on proprietary social networks.
• The default license for all material and documentation is a Collective Conditions for (Re)use license: <https://constantvzw.org/wefts/cc4r.en.html>
• Knowing that taking all of the above into account is sometimes easier said than done.
• Harassment is unacceptable and will not be tolerated during any Anaïs Berck event, meeting or gathering. See the full guidelines on the wall for what we understand by harassment.
If we run into conflict with one of these guidelines, or when we see that others are flagging our behaviour:
• we do not fuel the conflict.
• we speak with each other.
• we step out of the room and breathe.
• we apologise.
• we come back with a renewed engagement to collaborate.
• if we continue to transgress the guidelines, we will be asked to leave.
About trees
How can we give a voice to trees in our creation and decision processes? What methodologies can we find to somehow activate their presences, consult them, listen to their points of view?
Does it still make sense to print books on paper? And what is the ecological footprint of creating and reading a digital book? Is it possible to organise an equitable book publishing activity in which trees have a say in content and form? Is there a way to publish books that is respectful of trees and nature? What form would this take?
How can we find materials that translate, in a metaphorical way, the reasoning of the algorithm-author? Does it make sense to work with universal databases on trees and nature? What about a situated way of working, where we create our own databases including all kinds of subjective data, like vernacular names, stories, curing recipes, or contextual data such as temperature, light and moisture related to specific trees? How do we deal with existing data architectures, colonial herbaria, legislative texts, philosophy, literature? How does the choice of data affect the representation of trees?
Since programming code - the dialogue with algorithms - is often a very draining activity that can generate a lot of stress, it might be interesting to lead experiments to see if and how conscious visits to the forest could influence the writing of code.
Publication forms
The research and activities of Algolit and Anaïs Berck have led to different publication forms.
They grew somewhat organically, informed by deadlines, budget, materials and inspired by other works.
This list is an invitation to think through what publication forms can be, and what other forms we can think of.
screens
terminal
Text in the terminal can be shown in different colours, indentations can make the text breathe.
The background colour can be adapted. The terminal functions as a plain text screen.
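Coloured, indented terminal text of this kind can be produced with standard ANSI escape codes; a minimal sketch (the colour codes are standard, the helper function and the sample line are ours):

```python
# A minimal sketch of colouring terminal text with ANSI escape codes
# (supported by most terminal emulators; no third-party library needed).
GREEN = "\033[32m"     # foreground colour
ON_BLACK = "\033[40m"  # background colour
RESET = "\033[0m"      # back to the terminal's defaults

def colour(text, *codes):
    """Wrap text in ANSI codes and reset afterwards."""
    return "".join(codes) + text + RESET

line = colour("    a tree grows in the terminal", GREEN, ON_BLACK)
print(line)  # indentation plus colour makes the plain-text screen breathe
```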
Examples:
- 2019, Grafting trees
Description: http://anaisberck.be/grafting-trees/
- 2017, We are a sentiment thermometer : https://algolit.net/index.php?title=We_Are_A_Sentiment_Thermometer
browser
Using Javascript, node.js, Weasyprint or Paged.js, publication results can easily be shown in a browser. Pdfs can be automatically generated.
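A hedged sketch of this route: compose an HTML page in Python and, if WeasyPrint is available, render it to a pdf. `HTML(string=...).write_pdf()` is WeasyPrint's documented call; the page content here is invented.

```python
# Hedged sketch of the browser/pdf route: compose HTML, then (if WeasyPrint
# is installed) render it to a pdf file.
def book_page(title, body):
    """Return a minimal HTML document for one generated page."""
    return (
        "<html><head><title>{}</title>"
        "<style>@page {{ size: A4; margin: 2cm; }}</style></head>"
        "<body><h1>{}</h1><p>{}</p></body></html>"
    ).format(title, title, body)

html = book_page("Walk along the trees", "A generated paragraph.")

try:
    from weasyprint import HTML  # pip install weasyprint
    HTML(string=html).write_pdf("book.pdf")
except ImportError:
    pass  # WeasyPrint absent: the HTML can still be opened in a browser
```

The `@page` rule is the CSS mechanism WeasyPrint and Paged.js both use to turn a web page into fixed-size printed pages.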
Examples:
- 2019, Tf-idf : https://algolit.net/index.php?title=TF-IDF
- 2019, Levenshtein Distance reads Cortázar : http://anaisberck.tabakalera.eus/en
- 2021, Walk along the trees of Madrid : http://paseo-por-arboles.algoliterarypublishing.net/
- 2021, Michel, Cassandra, Google and the others : http://etraces.une-anthologie.be/
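The distance behind Levenshtein Distance reads Cortázar can be sketched in a few lines: the minimal number of single-character insertions, deletions and substitutions that turn one word into another.

```python
# A minimal sketch of the Levenshtein distance, computed row by row
# with dynamic programming.
def levenshtein(a, b):
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        current = [i]
        for j, cb in enumerate(b, 1):
            current.append(min(
                previous[j] + 1,               # deletion
                current[j - 1] + 1,            # insertion
                previous[j - 1] + (ca != cb),  # substitution
            ))
        previous = current
    return previous[-1]

print(levenshtein("árbol", "libro"))  # 4
```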
e-paper
printers
receipt printer
plotter
objects
robot Zora
Zora is a robot that was developed to assist people in rest homes and hospitals. In Muntpunt, a public library in Brussels, Zora serves visitors during public moments. For the occasion of Public Domain Day, Zora could be part of The Algoliterator, reciting the final texts people made the Algoliterator write.
- 2017, The Algoliterator : https://algolit.net/index.php?title=Algoliterator
paper on tables
2017, word2vec : word embeddings are a neural network technique in which the words of a text pass through 13 stages before calculations can begin. This installation visualised the different stages using printed stacks on 13 different tables:
https://algolit.net/index.php?title=Word2vec_basic.py
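One of those early stages, building the vocabulary, can be sketched as follows: only the most common words get an index, and everything rarer collapses into "UNK" (unknown). This is a simplified version of the corresponding step in word2vec_basic.py, with invented input.

```python
# A hedged sketch of the vocabulary-building stage of word2vec:
# frequent words get an index, rare words become UNK (index 0).
from collections import Counter

def build_dataset(words, vocabulary_size=3):
    counts = [("UNK", 0)] + Counter(words).most_common(vocabulary_size - 1)
    index = {word: i for i, (word, _) in enumerate(counts)}
    data = [index.get(word, 0) for word in words]  # 0 = UNK
    return data, index

words = "the tree and the wind and the rain".split()
data, index = build_dataset(words, vocabulary_size=3)
# "the" and "and" survive; "tree", "wind" and "rain" collapse into UNK
```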
wiki-to-print
We collectively wrote a catalogue for exhibitions using a Mediawiki. The graphic designer developed scripts that could transform the content of the wiki into a printed object.
- 2019, Dataworkers catalog : https://algolit.net/index.php?title=Data_Workers_Publication
- 2017, Algoliterary Encounters : https://algolit.net/index.php?title=Algoliterary_Encounters
tote bag
2019, The Book of Tomorrow in a Bag of Words : https://algolit.net/index.php?title=The_Book_of_Tomorrow_in_a_Bag_of_Words
Algorithms as Authors
How can we consider our geo-political and body-political position when designing, building, researching and theorizing about computing?
How can we embrace the 'decolonial' option: thinking through what it means to design and build the systems we propose to produce with and for those situated at the periphery of the world system? This involves thinking with trees as entities that have been under-cared-for and are literally on fire, due largely to the effects of colonialism and subsequent global climate change. How can we respectfully engage with trees when creating a publishing house?
How can we unfold coding practices: understand their structures, the histories and contexts they are embedded in, and the radical processes they execute? What if algorithms were committed to a more equal world and a healthier planet? What if algorithms tried to form a symbiosis with trees?
Can we further develop the methodology that emerged during the Algolit sessions, leading to an understanding of the code of existing models?
- trying out scripts
- commenting scripts
- printing out each line of code in the terminal
- playing with different inputs and outputs
- adapting scripts to your needs
- refining the tool
Three principles prevail:
a) move techniques to another context, for which they were not designed;
b) do not try to optimise techniques, but create interfaces, visualisations, code comments, so that they manage to express themselves in some way;
c) choose the level of understanding, each model being composed like an onion. It is possible, for example, to run a model as it is, to focus on the art of approximation (statistics, algebraic formulas), or to simply enact parts of models without using code, exploring and performing them physically or metaphorically (Dataworkers, 2019), because "an algorithm needs to be seen in order to be believed" (Knuth 1997, page 4).
It is a collective process. No person in the space knows all the answers to the questions that emerge. In this kind of process, insiders can become outsiders, and outsiders can become insiders.
Some examples of commented scripts:
- word2vec:
Commented script, with outputs of different steps as text files: https://gitlab.constantvzw.org/algolit/algolit/-/blob/master/algoliterary_encounter/word2vec/word2vec_basic_algolit_tensorflow-0.12.1.py
Book as form
When algorithms produce a book, they go beyond its established form. Index ‘pages’ can run over a thousand different pdfs, page numbers become a technical artefact for a digital book. Furthermore, computer scripts can generate an infinite amount of text, but given some graphical lay-out elements in the code, they can also generate an unlimited amount of pdfs, of which each one can be different. They are capable of generating so much text that it becomes noise, too big to grasp, impossible to read in a lifetime and potentially useless.
If we consider that the narratives of the algorithms are important to teach us about their functioning, then what they produce matters. What is printed also matters. We ask ourselves what kind of decision making methodologies we can invent to deal with their abundance and make their work legible for human beings.
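One hedged sketch of such a decision-making device: derive every copy from a seed, so that the "infinite" edition stays reproducible and citable. The vocabulary and template below are invented placeholders.

```python
# Hedged sketch: an "infinite" edition whose copies are each derived from
# a seed, so a particular copy can always be regenerated and cited.
import random

WORDS = ["root", "branch", "leaf", "bark", "seed", "canopy"]

def generate_copy(seed, sentences=3):
    """Generate one copy of the 'book'; the same seed always yields the same copy."""
    rng = random.Random(seed)
    lines = []
    for _ in range(sentences):
        a, b = rng.choice(WORDS), rng.choice(WORDS)
        lines.append("The {} remembers the {}.".format(a, b))
    return "\n".join(lines)

copy_42 = generate_copy(42)  # a citation could simply name the seed: "copy 42"
```

Seeding turns the unreadable abundance into an addressable one: the edition remains boundless, but every copy has a name.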
Contents of the book
Does reading lose its value when you can choose one million copies of a slightly different version of a book?
Do we show the infinity of the generated copies? If yes, how?
Does each book need the capacity to exist in infinite variations? Can we decide upon a 'static' version of a book? Is this interesting?
How can the organisation of books be presented in a fair inclusive way? What 'fair' categories/characteristics can we think of for a first 'index' web page?
Do we keep plain text as a style throughout all editions of the publishing house, reflecting the materiality of code and logging? Or do we also create 'books' that look more like classic, elaborately laid-out books? A more structured lay-out seems to invite a different way of coding, written with the lay-out in mind, for example by splitting logs into parts. But the code to produce the lay-out then also becomes part of the code, making the scripts less simple but more deliberately designed.
What is the status of an automatically generated pdf? Does it only exist in the RAM of the reader's computer, becoming a file only when the reader decides to download it? Is each copy saved on a server? Is it a good idea to respond to the field of literature, where the book is considered something fixed? Can the immediate download be considered a distribution strategy?
There are several reasons to stick to the pdf format for automatically generated books. Would it otherwise not become something else: media art, a webpage? Would the focus otherwise shift to the interactivity and the experience of the visitor? Is creating pdfs a way to speak back to an object produced by the publishing industry and tradition? Can we take on the dress and habits of that context: we start as a website and freeze it; what is generated is static; it is a quality that it is a pdf; a way to connect to the website and vice versa?
How can one cite automatically generated books?
Is each generated book a unique object? It can be easily copied and redistributed. Should we talk about unique objects? Create NFTs (non-fungible tokens, a way to integrate artworks into Blockchain, wasteful by design...) as unique objects?
How can we imagine the life of generated books? They can be present in different places: on the website, as a pdf, as a book during the walk, shared in the neighbourhood. They can become topics for a workshop: as tools and to open up algorithms to a wider audience.
Is it an idea to open up this platform as a service? This raises other questions: what about curatorship? A publishing house assures quality; it is more than a print-on-demand service.
What about the formats of the pdfs? Now they switch between A4 and US Letter, so they're easy to print at home. Does it make sense to play with formats?
Can we imagine linking the generation of the pdfs to a POD-platform like Lulu?
Technical aspects of generating books
What tools do we use to automatically generate a pdf? The graphic design of Levenshtein Distance reads Cortázar uses HTML and WeasyPrint. WeasyPrint does not support Javascript, so all variations of the book have to be generated by the layer that generates the scripts. This doesn't have to be a problem, but it is a limitation. WeasyPrint is implemented in Python, but it also implements its own browser/render engine, and the development of the library is limited.
Walk along the trees of Madrid is made with Paged.js because it supports Javascript. On the other hand, it needs a browser running on the server (100MB). Another downside of Paged.js is that it runs in Node, which means an extra layer/technology to maintain.
What is the API of the publishing platform? What is the infrastructure of the publishing house? On what server? How do we deal with safety? How do we make sure it keeps running? How do we avoid server overload, harmful robots, or problems with multiple users at the same time? Do we make a waiting queue? What is the scale of this project?
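The waiting-queue idea can be sketched with the standard library: requests enter a queue and a single worker serves them in order, so simultaneous visitors cannot overload the server. All names here are our own.

```python
# Hedged sketch of a waiting queue for book generation: one worker thread
# serves queued jobs one at a time; a bounded queue caps the waiting readers.
import queue
import threading

jobs = queue.Queue(maxsize=10)  # beyond 10 waiting readers, refuse politely
results = {}

def worker():
    while True:
        job_id, make_book = jobs.get()
        results[job_id] = make_book()  # e.g. render a pdf here
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

jobs.put(("reader-1", lambda: "book for reader-1"))
jobs.join()  # block until every queued job has been served
```

A bounded queue answers two of the questions at once: overload becomes a polite "please wait", and scale becomes an explicit number.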
What about the ecological impact of the 'infinite' generation of books? How do we calculate that? Include or exclude it?
Publishing practises Varia
Existing algorithmic publishing practises around Varia, Rotterdam
Publishing Experiments
Quilting
Zines
an interesting zine made by Michael Murtaugh and Manetta Berends, as a way to share their thoughts and document their classes: https://hub.xpub.nl/sandbot/SWAAT/00/SWAAT-00.pdf
a Jupyter notebook became part of this publication process, written in Markdown and processed using Weasyprint
code becomes a recipe and can be contextualised using the Jupyter notebooks; code becomes textual language that doesn't need to be separated from 'natural' language; you can see the immediate outcome of pieces of code
Nourishing Network
https://a-nourishing-network.radical-openness.org/
https://a-nourishing-network.radical-openness.org/pages/documentation.html
- each week one article was written; it was sent in twofold in a parcel together with an envelope; one could subscribe and would receive a parcel a week, with one text to keep and one text to send on to someone else
- the text would also be published online each Friday and be automatically posted from the website to an RSS feed/Mastodon; a mailing list could also have been an interesting option
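The website-to-feed step can be sketched with the standard library by wrapping articles in a minimal RSS 2.0 document. Titles and links below are invented placeholders.

```python
# Hedged sketch: wrapping weekly articles in a minimal RSS 2.0 feed,
# using only the standard library.
import xml.etree.ElementTree as ET

def make_feed(channel_title, articles):
    """Build an RSS document from (title, link) pairs."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = channel_title
    for title, link in articles:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "link").text = link
    return ET.tostring(rss, encoding="unicode")

feed = make_feed("A Nourishing Network", [
    ("Week 1: on soil", "https://example.org/week-1"),  # placeholder article
])
```

Serving this document is all a Mastodon bot or feed reader needs to pick up each Friday's text automatically.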
Continuous Publishing: Lumbung
https://roelof.info/lumbung/
- uses feeds/rss to bring together material from different places
- developed for Transmediale 15
- ActivityPub: the protocol behind the Fediverse (Mastodon etc.); each environment functions on its own, but because they speak the same language, you can bring them together automatically.
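The aggregation idea can be sketched as follows: parse several RSS documents and merge their items into one stream. The feeds here are inline strings; in practice they would be fetched from the different environments.

```python
# Hedged sketch of feed aggregation: merging items from several RSS
# documents into a single stream, with only the standard library.
import xml.etree.ElementTree as ET

FEED_A = '<rss version="2.0"><channel><item><title>lumbung note</title></item></channel></rss>'
FEED_B = '<rss version="2.0"><channel><item><title>varia log</title></item></channel></rss>'

def merged_titles(*feeds):
    """Collect the item titles of every feed, in feed order."""
    titles = []
    for feed in feeds:
        root = ET.fromstring(feed)
        titles += [item.findtext("title") for item in root.iter("item")]
    return titles

print(merged_titles(FEED_A, FEED_B))  # ['lumbung note', 'varia log']
```

Because every environment speaks the same feed language, merging is just parsing plus concatenation; sorting by publication date would turn it into a gazette.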
Continuous Publishing: Multi feeder
https://multi.vvvvvvaria.org/
- this is a way to publish what happens in and around Varia, using the RSS feed of the website and the hashtag on Mastodon
- the idea is to develop a newsletter/gazette this way
- it is a way to show that there are multiple ways to become an author; different ways of writing are possible
Agents for distribution
Logbot
https://vvvvvvaria.org/logs/
log with examples of plain text culture: https://vvvvvvaria.org/logs/x-y/
- was created during Relearn Rotterdam 2017
- needs the XMPP protocol to chat; when logbot is in the chatroom, it logs all images/messages, a continuous way to save things
- you create a very thin social layer between internal and public exchange
- logbot automatically generates a webpage
- code of Varia bots: https://git.vvvvvvaria.org/varia/bots
- it is a way to publish collectively, in the moment, daily, continuously; this is the strength of algorithmic publishing, something you cannot do with print
- cf. the Enron mailing list: time becomes a factor (daily, weekly)
- you can also delete things in the stream
Tools
Jupyter Lab
can run on a server, but also locally
direct & literary: combines Markdown with code, can be a nice way to publish code
works well online, is an occasion to talk about a server (vs computer)
could be a tool for the algoliterary publishing house if multiple makers participate
it could be interesting to look at the differences/potentialities between Jupyter Lab and a git repository
notebooks can be seen as private interactive websites; you cannot work collectively on one notebook
binder: you upload a notebook online and everyone gets access: https://jupyter.org/binder
Python library for plain text
Resonant publishing (ATNOFS)
this needs more clarification: it is a next edition of Ethertoff, where editors and graphic designers work together simultaneously
you create a stylesheet together, can adapt the template, and you generate the pdf
html, css, MD
responds to the wish to make pads speak Python: based on your text, you can generate new texts
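As one hedged illustration of generating new texts from a pad's own text (not necessarily how the ATNOFS tooling works): a tiny Markov chain that learns word transitions and speaks from them.

```python
# Hedged sketch: a minimal Markov chain that learns which word follows
# which in a source text, then generates new text from those transitions.
import random

def markov_chain(text):
    """Map each word to the list of words that follow it in the text."""
    words = text.split()
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def speak(chain, start, length=8, seed=0):
    """Walk the chain from a start word; a fixed seed keeps it reproducible."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and out[-1] in chain:
        out.append(rng.choice(chain[out[-1]]))
    return " ".join(out)

chain = markov_chain("the pad writes the text and the text writes the pad")
print(speak(chain, "the"))
```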
References
On distribution models
https://pad.vvvvvvaria.org/getting-it-out-there
- references on experiments with different distribution models
Bots as digital infrapunctures
https://bots-as-digital-infrapunctures.dataschool.nl/
- by Cristina Cochior
- you first need to understand the social codes of the environment in which you make your bot live: 'bot logic'
- algorithmic agents (bot, worker): something that 'lives' somewhere; it is a way to get information to people; it shifts the focus from interest to publishing; you need to think about the message, whom you want to talk to, the intention with which you publish, how to avoid overwhelming people, ...
Stories and histories of plants in Meise
In the framework of the residency in Meise we had the opportunity to talk to different people working in the institution.
We are very grateful for the time and energy of Ann Bogaerts - Head Herbarium, Koen Es - Head Education Department, Henry Engledow - Database manager, Sofie De Smedt - Project leader digitisation herbarium, Wim Tavernier - Wood expert, Denis Diagre - Manager archives, Sofie Meeus - Data steward & citizen science, Quentin Groom - Computer scientist biodiversity, Filip Vandelook - Head Seedbank, Marc Reynders - Scientific manager of the living collections.
What follows are a series of stories of plants. The stories relate to fragments of the conversations that caught our attention. Some stories weave in elements from different conversations and all stories are the result of a lot of browsing through the agglomerate of online botanical databases we discovered in Meise.
Doliocarpus major J.F.Gmel. →