Worksession

During a two-week residency at deBuren we organise a voyage of discovery into the practice of cqrrelating data. Cqrrelation is a typo-enhanced notion. You can pronounce it as crummylation, crappylation, queerylation... Cqrrelation refers to the shadow zones of statistics and computing.

During this work session a group of artists, hackers, theorists, researchers and number geeks will look into ways of inserting their intuition, senses, bodies and voices as additional parameters in the calculations, and see where that will lead them.
You’re very welcome to join!

We start from two different points. First, we follow the traces of the potato through the digital architectures of local, national and international databases and policies. Then, we look at algorithms and methodologies for text analysis and pattern recognition.

Scroll down for more information on content and practicalities.
More info: an *at* constantvzw *.* org

With the support of Vlaams-Nederlands Huis deBuren


**The framework**

Cqrrelation is poetry to the statistician. It is science to the dissident. It is detox to the data-addict.

Cqrrelation is a typo-enhanced notion. It can also be pronounced as crummylation, crappylation, queerylation... These words try to hint at different shadowy elements of statistics and computing, and more specifically at the problematic use of those disciplines to correlate large amounts of data and to create models that determine reality and life on the basis of parameters, criteria and numbers.

A cqrrelation is a correlation with impurities, with missing, invisible, broken or suspicious data. Unlike a correlation, it does not pretend to be neutral; its relation with digital traces is not innocent. Allowing irony and speculation to contaminate empirical models and logical truths, a cqrrelation questions its own capacity to produce models or truths, and happily undermines its own authority. It will also obstruct the practice of a correlation, if deemed necessary.

A cqrrelation is a correlation that is complicated by non-statistical constraints. We think computing and statistical techniques can be great tools for this, but there are many other tools to complement them; one should also be able to engage her own body and her own voice, touch objects and talk to people.

A cqrrelation perceives objects behind the data, and these objects can have a sweet taste or a nasty smell. In return, the relation it creates may also be sticky or irritating.

A cqrrelator breaks the statistician’s oath by committing the sin of becoming emotional with data. And this generates even more data.

**Week 1: Data portraits of a potato**

The fresh potato we buy at the market will most probably have left very different traces in our digital data architectures than the frozen, sliced potato that is used for the famous ’Mitraillette’ in the closest snack bar. The sweet potato, the potato flour, the machines that are used to harvest the potato, and, oh yes, the people who cultivate and distribute them: all of them are represented with labels and numbers in databases about export and import in the European Union, in Belgium, in the Brussels Region and, who knows, perhaps also in the City of Brussels. Big treaties are being negotiated on the basis of these numbers, such as TTIP, the Transatlantic Trade and Investment Partnership between Europe and the US.

In collaboration with people from the neighborhood, as well as with experts such as Karin Ulmer, who has followed European and international food policy for more than ten years as a lobbyist for an NGO that defends the interests of local farmers from third-world countries in the EU, we will create playful but critical portraits of the potato in all its data connections and disconnections.

An exercise in curiosity.

At the origin of the Portrait of a potato work session is a conversation with Karin Ulmer, in which she explained her frustration with the growth-by-trade paradigm. She tries to construct counter-arguments to the dominant discourse that prevails in European law-making, but each time this discourse seems to be determined mainly by data. Would it be possible to create different perspectives by de-correlating and re-correlating these data? By creating unexpected interventions and connections?

Presented with impressive datasets like the European Commission Exporthelp or the Worldseed databases, we feel the need to take a certain distance, to find our own scale. We are not experts and our knowledge of these subjects is limited. We needed the "right" object to start with. In the conversation about the flows of capital and the maneuvers of the agricultural industry, a very simple object appeared as an example: the potato. The humble potato seemed a very productive starting point for many reasons:

 This very unspectacular tuber won’t outshine its relationships.
 It has a strong local identity and travels a lot. And in interesting company.
 It is an object everyone encounters in daily life (at least in Belgium); it doesn’t belong only to the realm of statistics. We can go from the dataset to the local shop and back again.
 The correlations that one can develop starting from the potato are not obvious; they require work and imagination.
 We appreciate the shape of its roots, naturally inclined to create subterranean networks.
 It has its own geo-politics: Belgium has exported 247.195 € worth of potatoes to the Syrian Arab Republic.

How do we want to work? Some suggestions:

 smuggle unusual parameters into the datasets linked to the potato (export/import, seeds, health, food), and look for less obviously related datasets (a minimal sketch of such a forced correlation follows this list).
 trace the legal status of the potato and see which new relations can emerge.
 talk to and interview the people who produce and sell it, and question our datasets from there.
 investigate its chemical composition and its "edible" properties. If we see it as an ingredient for producing alcohol or as the French fry on the McDonald’s menu, we attach very different relations to it and find it entangled in very different webs of connections.
 compose a corpus of texts that will serve as a base for textual analysis during the second week.
 create a neural network of potato images.
 assemble a cartography of the potato. Trace its journeys, the zones where it is produced, stored, consumed, thrown away.
 taste the potato in its different states. Organize a gastronomic session.
 let’s watch together the potato scene in Chantal Akerman’s Jeanne Dielman
 stretch our capacity for bearing boredom
 ...
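
To make the first suggestion in the list above concrete: a cqrrelation could start from something as small as forcing a correlation between a potato-related series and a series that has no business being there. Below is a minimal sketch of that gesture; the file names, column names and figures are hypothetical placeholders, not real datasets, and pandas is simply one convenient tool among many.

```python
# A minimal sketch of a deliberately suspicious correlation ("cqrrelation").
# The CSV files, column names and figures are hypothetical placeholders.
import pandas as pd

# Hypothetical yearly Belgian potato exports (in euro) and Brussels rainfall (in mm).
exports = pd.read_csv("potato_exports.csv")   # columns: year, export_value_eur
rainfall = pd.read_csv("rainfall.csv")        # columns: year, rainfall_mm

# Merge the two series on the year, even though nothing says they belong together.
merged = exports.merge(rainfall, on="year")

# Pearson correlation between the two unrelated columns.
r = merged["export_value_eur"].corr(merged["rainfall_mm"])
print(f"Potato exports vs. rainfall: r = {r:.2f}")

# A cqrrelation would not stop at the number: it also records what is
# missing, broken or suspicious in the data before letting the number speak.
print(merged.isna().sum())
```

Whether the resulting number means anything is exactly the question the work session wants to keep open.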

**Week 2: The nearest neighbours inherit all qualities from their gold1000 parent**

Computational linguistics is the interdisciplinary field that deals with the analysis and modeling of natural language from the point of view of computation.

While its origins lie in cold-war urges and still have much to do with the cybernetic dream of artificial intelligence, nowadays the field has many ramifications and many practical applications. Great progress has been made in linguistics and language acquisition thanks to this discipline, but its outcomes also increasingly fulfil the needs of surveillance, economy and governance. These more mundane and materialistic sides generally deal with the translation of a multiplicity of written texts into comparable, processable data, and with the profiling of their authors.

Nevertheless, if we consider the historical ancestors of this discipline, where the wonder of logic and the magic of written language meet, this intertwining of knowledge and power is not new.
If we could draw an imaginary line from the mathe-magical methods of the Kabbalah through Ramon Llull’s rhetorical combinatorics, it would probably arrive at computational linguistics.
On the periphery of this line, though, there are also many extraordinary examples of art, music, poetry and prose, attracted by the very same magic.

Starting somewhere near...

The computational linguists of the research center CLiPS at the University of Antwerp are working on different text analysis tools and datasets to address large corpora of natural language. Among the projects they are busy with is research on the recognition of gender- and age-based author profiles and the automatic detection of lies in written text, but they also release open-source libraries and tools for web mining and language processing.
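
As a point of orientation, the kind of out-of-the-box analysis these libraries offer can be tried in a few lines. The sketch below uses Pattern's English module for part-of-speech tagging and sentiment scoring; it assumes the pattern package is installed (it historically targeted Python 2.7, with later releases adding Python 3 support), and the example sentence is our own, not part of any CLiPS dataset.

```python
# A small sketch with CLiPS Pattern: tag a sentence and score its sentiment.
# Assumes the pattern package is installed; the example sentence is ours.
from pattern.en import parse, sentiment

text = "The humble potato travels in interesting company."

# parse() returns a tagged string: word/part-of-speech/chunk tags per token.
print(parse(text, lemmata=True))

# sentiment() returns a (polarity, subjectivity) pair, roughly in [-1, 1] and [0, 1].
polarity, subjectivity = sentiment(text)
print(polarity, subjectivity)
```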

We think these tools and methodologies merit a closer look, and this week could be a lovely occasion for that. So how can we crummify them? And what are the creative elements latent in those technologies? For example, if the average author, built out of thousands of different authors, were to write, what would the text look like? Is it possible to become a perfect liar by studying the lie-detection algorithms?
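
One playful way to read the "average author" question is as a word-frequency exercise: pool several authors, average their individual word distributions, and let the averaged distribution babble. The sketch below does exactly that with three tiny invented snippets; with real corpora, for instance public-domain authors gathered during the first week, the same averaging would apply.

```python
# A toy "average author": average the word distributions of several authors
# and sample words from the result. The snippets below are placeholders.
import random
from collections import Counter

authors = {
    "author_a": "the potato is a humble and patient tuber",
    "author_b": "data flows where the treaties tell it to flow",
    "author_c": "the market smells of frying oil and statistics",
}

# Normalise each author's word counts to frequencies, then average them,
# so that a prolific author does not outweigh a terse one.
average = Counter()
for text in authors.values():
    counts = Counter(text.split())
    total = sum(counts.values())
    for word, n in counts.items():
        average[word] += n / total / len(authors)

words, weights = zip(*average.items())
print(" ".join(random.choices(words, weights=weights, k=12)))
```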

This week we want to look closely at, and experiment with, the non-pragmatic potential of tools that were designed as means of economy and surveillance. We will scry through the magic of code and language and try to find something other than quantification and determination.

How do we want to work? Some suggestions of what we might be busy with this week:

 Look closely at a certain set of tools and the papers that explain them, e.g. MBSP or Pattern for Python from CLiPS, to get a general grasp of how the actual tools function as well as of the theory (and ideology) that supports them.
 Mess around with the existing software, exploring new possible applications outside of the suggested policing/quantifying ones, re-opening the black box of the tools, bringing them into some unusual or absurd situations.
 Explode the infinite poetic possibilities that are latent in these tools and in the timeless art of combinatorics.
 Apply text-processing techniques to the written material we gathered during the previous week, or to large corpora of texts from authors in the public domain.
 Research ways of writing and communicating that escape systematic profiling and quantification, by reverse engineering the tools, hiding in noise, blending with the norm, or more surrealist techniques (see the sketch after this list).
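
The last suggestion can be grounded with a very small experiment. The title of this week refers to nearest-neighbour methods, where an unseen text inherits the label of the labelled examples it most resembles. The sketch below builds such a profiler with scikit-learn (an assumption on our side, not a CLiPS tool, but the same family of techniques) on invented two-line "corpora"; every text and label is a placeholder. Reverse engineering would start by watching which tiny edits flip the predicted label.

```python
# A minimal nearest-neighbour author profiler, in the spirit of the week's title:
# an unlabelled text "inherits" the label of its nearest labelled neighbours.
# Texts and labels are invented placeholders; scikit-learn is assumed installed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

texts = [
    "I totally love this, it was honestly the best day ever!!",
    "The report was submitted on time and the results are consistent.",
    "omg can't believe how cute the potatoes at the market were",
    "Further analysis confirms the correlation between the two datasets.",
]
labels = ["casual", "formal", "casual", "formal"]

# Character n-grams are a common, surprisingly robust profiling feature.
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
X = vectorizer.fit_transform(texts)

profiler = KNeighborsClassifier(n_neighbors=3)
profiler.fit(X, labels)

probe = "honestly the datasets were the best ever"
print(profiler.predict(vectorizer.transform([probe])))
# Hiding in noise or blending with the norm starts here: which small edits to
# the probe sentence change the label it inherits from its neighbours?
```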

**Practical**

The idea is to work for ten days, with a different topic each week. Ideally we will form smaller project groups and inform each other daily about what happens in each group, so that the different questions and discoveries can circulate. It is possible to join for one week only or for both, and to join only one group or to participate in several.
There will be slots reserved for specific presentations, if a group or a person wants to take a moment to explain in more detail a certain aspect of her work, a concept or a project that is relevant to the investigation. All participants are invited to take part in Cqrrelations Loop, a public evening at Recyclart on 15 January, with brief presentations of the work in progress.
Meals will be served at lunchtime so we have the pleasure of relaxing together.

Worksession

During a two-week residency at deBuren, we organise a voyage of discovery through the practice of cqrrelating data. A cqrrelation is a notion with a deliberate typo. You can pronounce it as crummylation, crappylation, queerylation... by analogy with the English words ’crummy’, ’crap’ and ’queer’. Cqrrelation refers to the shadow zone of statistics and computing.
A group of artists, hackers, theorists, statisticians and researchers will get to work with data, looking for ways to smuggle their intuition, voices, senses, bodies and physical context into the algorithms and tables.

You are very welcome to join!
We start from two different perspectives. On the one hand we follow the trail of the potato through digital architectures of local, national and international databases and governance agreements; on the other we investigate algorithms and methods for text analysis and pattern recognition.

Scroll down for more information on content and practicalities (in English, which is also the working language of the workshop). More info: an *at* constantvzw *.* org

With the support of the Vlaams-Nederlands Huis deBuren



Worksession

During a two-week residency at deBuren, we organise a voyage of exploration into the practice of cqrrelating data. A cqrrelation is a notion enhanced by a typo. You can pronounce it as crummylation, crappylation, queerylation... by analogy with the English words ’crummy’, ’crap’ and ’queer’. Cqrrelation refers to the shadow zones of statistics and computing.

A group of artists, hackers, theorists, statisticians and researchers will experiment with data, looking for ways to make their intuition, their own voice, their senses, their bodies and their physical context resonate as additional parameters in the algorithms and tables.
You are cordially invited to join us!

We have defined two starting points. First, we follow the traces of the potato through the digital architectures of local, national and international databases and governance documents. Then, we look into existing algorithms and methodologies for collecting texts, analysing them and recognising patterns.

Further down this page you will find more information on the content and the practical aspects (in English, which will be the working language of the work session). Note that places are limited. More info: an *at* constantvzw *.* org

With the support of the Vlaams-Nederlands Huis deBuren




Werksessie - Constant
Project:

Worksession

During a residency of two weeks in deBuren we organise a discovery voyage into the practise of cqrrelating data. Cqrrelation is a typo-enhance notion. You can pronounce it as crummylation, crappylation, queerylation... Cqrrelation refers to the shadow zones of statistics and computing.

During this work session a group of artists hackers, theorists, researchers and number geeks will look into ways of inserting their intuition, senses, bodies and voices as more parameters in the calculations and see where it will lead them.
You’re very welcome to join!

We start from two different points. First, we follow the traces of the potato throughout digital architectures of local, national and international databases and policies. Then, we look at algorithms and methodologies for text analysis and pattern recognition.

Scroll down for more information on content and practicalities.
More info: an *at* constantvzw *.* org

With the support of Vlaams-Nederlands Huis deBuren


**The framework**

Cqrrelation is poetry to the statistician. It is science to the dissident. It is detox to the data-addict.

Cqrrelation is a typo-enhanced notion. It can also be pronounced as crummylation, crappylation, queerylation... These words try to hint at different shadowy elements of statistics and computing, and more specifically, at the problematic use of those disciplines to correlate big amounts of data and create models to determine reality and life based on parameters, criteria, numbers.

A cqrrelation is a correlation with impurities, with missing, invisible, broken or suspicious data. Differently from a correlation, it does not pretend to be neutral, its relation with digital traces is not innocent. Allowing irony and speculation to contaminate empirical models and logical truths, a cqrrelation questions its capacity to produce models or truths, and happily undermines its own authority. It will also obstaculate the practice of a correlation, if deemed necessary.

A cqrrelation is a correlation that is complicated by non-statistical constraints. We think computing and statistical techniques can be great tools for that, but there are many other tools to complement them; one should also be able to engage her own body, her own voice, touch objects and talk to people.

A cqrrelation perceives objects behind the data, and these objects can have a sweet taste or a nasty smell. In return, the relation it creates may also be sticky or irritating.

A cqrrelator breaks the statistician’s oath by committing the sin of becoming emotional with data. And this generates even more data.

**Week 1: Data portraits of a potato**

The fresh potato we buy at the market will most probably have left very different traces in our digital data-architectures than the frozen sliced potato that is used for the famous ’Mitraillette’ in the closest snack bar. Also the sweet potato, the potato flour or the machines that are used to harvest the potato, and oh yes, the people who cultivate and distribute them, all of them are represented with labels and numbers in databases about export and import in the European Union, in Belgium, in the Region of Brussels and who knows, also in the town of Brussels? Big treaties are being negotiated based on these numbers, like f.e. the TTIP, The Transatlantic Trade and Investment Partnership between Europe and the US.

In collaboration with people from the neighborhood, but also with an expert like Karin Ulmer, who has been following the European and international food policies for more than ten years, as a lobbyist for an NGO that defends the interests of local farmers from third world countries in the EU. We will create playful but critical portraits of the potato in all its data connections and disconnections.

An exercise in curiosity.

At the origin of the idea of the Portrait of a potato work session is a conversation with Karin Ulmer. She was explaining her frustration with the growth-by-trade paradigm. She tries to construct counter arguments to the dominant discourse that prevails in European law-making but each time it seemed that this discourse is mainly determined by data. Would it be possible to create different perspectives by de-correlating and re-correlating these data? By creating unexpected interventions and connections?

Presented with the impressive datasets like the European Commission Exporthelp or the Worldseed databases, we feel the need to take a certain distance, to find our scale. We are not experts and our knowledge about these subjects is limited. We needed the "right" object to start with. In the conversation about the flows of capital and the maneuvers of the agricultural industry, a very simple object appeared as an example: the potato. The humble potato seemed a very productive starting point for many reasons:

 This very unspectacular tuber won’t outshine its relationships.
 It has a strong local identity and travels a lot. And in interesting company.
 It is an object everyone encounters in daily life (at least in Belgium), it doesn’t belong only to the realm of statistics. We can go from the dataset to the local shop and back again.
 The correlations that one can develop starting from the potato are not obvious, it requires work and imagination.
 We appreciate the shape of its roots naturally inclined to create subterranean networks.
 It has its own geo-politics, Belgium has exported 247.195"¬ € in potatoes to the Syrian Arab Republic.

How do we want to work? Some suggestions

 smuggle unusual parameters into the datasets linked to the potato (export/import, seeds, health, food), and look for less obviously related datasets.
 trace the legal status of the potato and see which new relations can emerge.
 talk and interview the people who produce and sell it. And question back our datasets from there.
 investigate its chemical composition and its "edible" properties. If we see it as an element to produce alcohol or as the French fry on the McDonald menu, we can attach to it very different relations. Seen as one or the other, we find it entangled in a very different web of connections.
 compose a corpus of texts that will serve as a base for textual analysis during the second week.
 create a neural network of potato images.
 assemble a cartography of the potato. Trace its journeys, the zones where it is produced, stored, consumed, thrown away.
 taste the potato in its different states. Organize a gastronomic session.
 let’s watch together the potato-scene in Jeanne Dielman of Chantal Ackerman
 stretch our capacity of bearing boredom
 ...

**Week 2: The nearest neighbours inherit all qualities from their gold1000 parent**

Computational linguistics is the interdisciplinary field that deals with the analysis and modeling of natural language from the point of view of computation.

While its origins lay in cold-war urges and still have much to do with the cybernetic dreams of an artificial intelligence, nowadays the field has many ramifications and many practical applications. Great progresses have been made in the fields of linguistics and of language acquisition thanks to this discipline, but its outcomes are also increasingly fulfilling necessities of surveillance, economy and governance. These more mundane and materialistic sides deal in general with the translation of a multiplicity of written text into comparable, processable data, and the profiling of its authors.

Nevertheless, if we consider the historical ancestors of this discipline, where the wonder of logic and the magic of written language meet, this intertwining of knowledge and power is not new.
If we could draw an imaginary line from the mathe-magical methods of the Kabbalah, through Ramon Llull’s rhetorical combinatorics, it would probably arrive to computational linguistics.
In the periphery of this line, though, there are as well many extraordinary examples of art, music, poetry and prose, attracted by the very same magic.

Starting somewhere near...

The computer-linguists of the research center CLiPS at the University of Antwerp are working on different text analysis tools and datasets to address large corpora of natural language. Between the projects they are busy with, there is a research on the recognition of gender- and age-based author profiles and the automatic detection of lies in written text, but they also release open-source libraries and tools for web mining and language-processing.

We think that these tools and methodologies merit to have a look at them. This week could be a lovely occasion for that, so how can we crummify them? And what are the creative elements latent in those technologies? For example, if the average author, built out of thousands of different authors, would write, what would the text look like? Is it possible to become a perfect liar by studying the lying-detection algorithms?

This week we want to look closely at and experiment with the non-pragmatic potential of tools that were designed as economic and surveillance means. We will scry through the magic of code and language and try to find something else than quantification and determination.

How do we want to work? Some suggestions of what we might be busy with this week:

 Look closely at a certain set of tools and the papers that explain them, eg. MBSP or Pattern for Python from CLiPS, to get a general grasp of the functioning of the actual tools as well as the theory (and ideology) that is supporting them.
 Mess around with the existing software, exploring new possible applications outside of the subset of the suggested policing/quantifying ones, re-opening the black-box of the tools, bringing them to some unusual or absurd situations.
 Explode the infinite poetic possibilities that are into to these tools, and in the timeless art of combinatorics
 Apply text processing techniques on the written material we gathered during the previous week, or to the large corpora of texts from authors in the public domain.
 Research ways of writing and communicating that escape systematic profiling and quantification, by reverse engineering the tools, hiding in noise, blending with the norm, or more surrealist techniques.

**Practical**

The idea is to work ten days, each week on a different topic. Ideally we will form smaller project groups and inform each other daily about what happens in each group, so the different questions and discoveries can circulate. It is possible to join for one week only or for both, to only join one group or to participate in different groups.
There will be slots reserved for specific presentations, if a group or a person wants to take a moment to explain in more details a certain aspect of her work, a concept or a project that is relevant for the investigation. All participants are invited to participate to Cqrrelations Loop, a public evening at Recyclart on 15th January, with brief presentations of the work in progress.
Meal will be served at lunch time so we have the pleasure to relax together.

Werksessie

Tijdens een residentie van twee weken in deBuren organiseren we een ontdekkingsreis doorheen de praktijk van het cqrreleren van data. Een Cqrrelatie is een notie met een bedoelde typfout. Je kan het uitspreken als crummylatie, crappylatie, queerylatie... naar analogie met de Engelse woorden ’crummy’,’crap’,’queer’. Cqrrelatie verwijst naar de schaduwzone van de statistiek en computing.
Een groep kunstenaars, hackers, theoretici, statistici en onderzoekers gaan aan de slag met data, waarbij ze op zoek gaan naar manieren om hun intuïtie, stem, zintuigen, lichamen en de fysieke context in de algoritmes en tabellen te smokkelen.

Je bent van harte welkom om deel te nemen!
We vertrekken vanuit twee verschillende perspectieven. We volgen het spoor van de aardappel doorheen digitale architecturen van lokale, nationale en internationale databanken en beheerakkoorden enerzijds; en onderzoeken algoritmes en methodes voor tekstanalyse en pattern recognition anderzijds.

Scroll voor meer informatie over inhoud en praktische zaken (in het Engels, dat is ook de voertaal van de workshop). Meer info: an *at* constantvzw *.* org

Met de steun van het Vlaams-Nederlands Huis deBuren


**The framework**

Cqrrelation is poetry to the statistician. It is science to the dissident. It is detox to the data-addict.

Cqrrelation is a typo-enhanced notion. It can also be pronounced as crummylation, crappylation, queerylation... These words try to hint at different shadowy elements of statistics and computing, and more specifically, at the problematic use of those disciplines to correlate big amounts of data and create models to determine reality and life based on parameters, criteria, numbers.

A cqrrelation is a correlation with impurities, with missing, invisible, broken or suspicious data. Differently from a correlation, it does not pretend to be neutral, its relation with digital traces is not innocent. Allowing irony and speculation to contaminate empirical models and logical truths, a cqrrelation questions its capacity to produce models or truths, and happily undermines its own authority. It will also obstaculate the practice of a correlation, if deemed necessary.

A cqrrelation is a correlation that is complicated by non-statistical constraints. We think computing and statistical techniques can be great tools for that, but there are many other tools to complement them; one should also be able to engage her own body, her own voice, touch objects and talk to people.

A cqrrelation perceives objects behind the data, and these objects can have a sweet taste or a nasty smell. In return, the relation it creates may also be sticky or irritating.

A cqrrelator breaks the statistician’s oath by committing the sin of becoming emotional with data. And this generates even more data.

**Week 1: Data portraits of a potato**

The fresh potato we buy at the market will most probably have left very different traces in our digital data-architectures than the frozen sliced potato that is used for the famous ’Mitraillette’ in the closest snack bar. Also the sweet potato, the potato flour or the machines that are used to harvest the potato, and oh yes, the people who cultivate and distribute them, all of them are represented with labels and numbers in databases about export and import in the European Union, in Belgium, in the Region of Brussels and who knows, also in the town of Brussels? Big treaties are being negotiated based on these numbers, like f.e. the TTIP, The Transatlantic Trade and Investment Partnership between Europe and the US.

In collaboration with people from the neighborhood, but also with an expert like Karin Ulmer, who has been following the European and international food policies for more than ten years, as a lobbyist for an NGO that defends the interests of local farmers from third world countries in the EU. We will create playful but critical portraits of the potato in all its data connections and disconnections.

An exercise in curiosity.

At the origin of the idea of the Portrait of a potato work session is a conversation with Karin Ulmer. She was explaining her frustration with the growth-by-trade paradigm. She tries to construct counter arguments to the dominant discourse that prevails in European law-making but each time it seemed that this discourse is mainly determined by data. Would it be possible to create different perspectives by de-correlating and re-correlating these data? By creating unexpected interventions and connections?

Presented with the impressive datasets like the European Commission Exporthelp or the Worldseed databases, we feel the need to take a certain distance, to find our scale. We are not experts and our knowledge about these subjects is limited. We needed the "right" object to start with. In the conversation about the flows of capital and the maneuvers of the agricultural industry, a very simple object appeared as an example: the potato. The humble potato seemed a very productive starting point for many reasons:

 This very unspectacular tuber won’t outshine its relationships.
 It has a strong local identity and travels a lot. And in interesting company.
 It is an object everyone encounters in daily life (at least in Belgium), it doesn’t belong only to the realm of statistics. We can go from the dataset to the local shop and back again.
 The correlations that one can develop starting from the potato are not obvious, it requires work and imagination.
 We appreciate the shape of its roots naturally inclined to create subterranean networks.
 It has its own geo-politics, Belgium has exported 247.195"¬ € in potatoes to the Syrian Arab Republic.

How do we want to work? Some suggestions

 smuggle unusual parameters into the datasets linked to the potato (export/import, seeds, health, food), and look for less obviously related datasets.
 trace the legal status of the potato and see which new relations can emerge.
 talk and interview the people who produce and sell it. And question back our datasets from there.
 investigate its chemical composition and its "edible" properties. If we see it as an element to produce alcohol or as the French fry on the McDonald menu, we can attach to it very different relations. Seen as one or the other, we find it entangled in a very different web of connections.
 compose a corpus of texts that will serve as a base for textual analysis during the second week.
 create a neural network of potato images.
 assemble a cartography of the potato. Trace its journeys, the zones where it is produced, stored, consumed, thrown away.
 taste the potato in its different states. Organize a gastronomic session.
 let’s watch together the potato-scene in Jeanne Dielman of Chantal Ackerman
 stretch our capacity of bearing boredom
 ...

**Week 2: The nearest neighbours inherit all qualities from their gold1000 parent**

Computational linguistics is the interdisciplinary field that deals with the analysis and modeling of natural language from the point of view of computation.

While its origins lay in cold-war urges and still have much to do with the cybernetic dreams of an artificial intelligence, nowadays the field has many ramifications and many practical applications. Great progresses have been made in the fields of linguistics and of language acquisition thanks to this discipline, but its outcomes are also increasingly fulfilling necessities of surveillance, economy and governance. These more mundane and materialistic sides deal in general with the translation of a multiplicity of written text into comparable, processable data, and the profiling of its authors.

Nevertheless, if we consider the historical ancestors of this discipline, where the wonder of logic and the magic of written language meet, this intertwining of knowledge and power is not new.
If we could draw an imaginary line from the mathe-magical methods of the Kabbalah, through Ramon Llull’s rhetorical combinatorics, it would probably arrive to computational linguistics.
In the periphery of this line, though, there are as well many extraordinary examples of art, music, poetry and prose, attracted by the very same magic.

Starting somewhere near...

The computer-linguists of the research center CLiPS at the University of Antwerp are working on different text analysis tools and datasets to address large corpora of natural language. Between the projects they are busy with, there is a research on the recognition of gender- and age-based author profiles and the automatic detection of lies in written text, but they also release open-source libraries and tools for web mining and language-processing.

We think that these tools and methodologies merit to have a look at them. This week could be a lovely occasion for that, so how can we crummify them? And what are the creative elements latent in those technologies? For example, if the average author, built out of thousands of different authors, would write, what would the text look like? Is it possible to become a perfect liar by studying the lying-detection algorithms?

This week we want to look closely at and experiment with the non-pragmatic potential of tools that were designed as economic and surveillance means. We will scry through the magic of code and language and try to find something else than quantification and determination.

How do we want to work? Some suggestions of what we might be busy with this week:

 Look closely at a certain set of tools and the papers that explain them, eg. MBSP or Pattern for Python from CLiPS, to get a general grasp of the functioning of the actual tools as well as the theory (and ideology) that is supporting them.
 Mess around with the existing software, exploring new possible applications outside of the subset of the suggested policing/quantifying ones, re-opening the black-box of the tools, bringing them to some unusual or absurd situations.
 Explode the infinite poetic possibilities that are into to these tools, and in the timeless art of combinatorics
 Apply text processing techniques on the written material we gathered during the previous week, or to the large corpora of texts from authors in the public domain.
 Research ways of writing and communicating that escape systematic profiling and quantification, by reverse engineering the tools, hiding in noise, blending with the norm, or more surrealist techniques.

**Practical**

The idea is to work ten days, each week on a different topic. Ideally we will form smaller project groups and inform each other daily about what happens in each group, so the different questions and discoveries can circulate. It is possible to join for one week only or for both, to only join one group or to participate in different groups.
There will be slots reserved for specific presentations, if a group or a person wants to take a moment to explain in more details a certain aspect of her work, a concept or a project that is relevant for the investigation. All participants are invited to participate to Cqrrelations Loop, a public evening at Recyclart on 15th January, with brief presentations of the work in progress.
Meal will be served at lunch time so we have the pleasure to relax together.
