See also: Corset, Skeleton, Skin.
The exhilarating sensation of immediate access. Fast, decentralised and widespread: the web is where the breathtaking versatility of information can be experienced. Microformats, data mining, search engines and RSS feeds have liberated data from the obligation to appear at pre-designated places. Mixed and meshed up in unexpected ways, new insights are served up each time you reload a page.
“To lead the World Wide Web to its full potential” is the irresistible slogan carried by the World Wide Web Consortium, an international standards body set up by Tim Berners-Lee in 1994. The consortium is responsible for developing well-known specifications such as HTML, CSS and XML, and many other guidelines that help streamline the way data circulates online. All standards produced by the W3C are open standards, meaning that their specifications are without exception released under non-proprietary, royalty-free licenses, making sure that neither companies nor individuals can appropriate them for their exclusive benefit. Web standards are an evolving set of agreements about how digital information should be structured and organised into compatible units that are ready to be combined, compared and interchanged. It all starts from the principle that the more information is like data, the more flexibly it behaves.
But the untamed reality of the World Wide Web is difficult to regulate. At a conference about the history of web design, Steven Pemberton, chair of the W3C’s HTML and Forms Working Groups, expressed his concern about the status quo. "Looking at typical web pages now, it is a real mess, and it is very hard to extract the true information from a page. I’m not saying visual is unimportant. It is important, of course, but it’s subordinate to meaning and it’s important not to mix the two." Pemberton referred to the working premise of the W3C that the separation of ’content’ from presentation facilitates the exchange of information, of ’true information’ or ’meaning’.
Whether you agree with the routine division Pemberton proposes between ’the visual’ and ’true information’ depends on your definition of design. If you think that design is more than packaging, then the way a text is visually organised, a hierarchy laid out and an image placed matters for what something means. In the case of design for print, this results in permanent, though not necessarily exclusive, amalgamations of data and form; a web standards approach essentially breaks the familiar bond beyond repair.
The split happens first of all on the level of precision and quality control, which you could consider the result of a lack of standardisation. Around 1996, browser vendors were fighting desperately for their share of the online market. To bind users to their products, both Netscape and Microsoft shipped self-invented markup with each update of their software, and they sometimes supported, and sometimes completely ignored, their competitors’ tags. The interpretation of the web page you view can therefore differ tremendously depending on the browser you choose. This causes problems for content providers and designers as much as for the users of their content. As there is no way to know what software your audience will use to browse, it is impossible to predict what they will actually see on their screens. Vice versa, a user never knows whether she is viewing a page in the way it was originally intended.
Frustrated by a loss of control, web designers started to pressure for standardisation themselves. It was literally WaSP, the Web Standards Project, a designers’ grassroots organisation, who demanded that the carefully chosen term ’recommendation’ initially proposed by the W3C should be changed to the more authoritarian ’web standards’. “Most of the Web remains a Balkanized mess of non-valid markup, unstructured documents yoked to outdated presentational hacks, and incompatible code fragments that leave many millions of web users frustrated and disenfranchised." WaSP was surprisingly successful in creating awareness amongst their own community. Through promoting the so-called ’Acid test’, the makers of browsers like Safari and Opera were seduced into competing for compliance rather than for making a difference. Even now that the standards approach has become standard practice in most design agencies and bureaus worldwide, ’browser compatibility’ is still far from achieved. Microsoft’s recent introduction of Silverlight, for example, a browser plugin that offers native video support and other kinds of functionality currently lacking from (X)HTML and CSS, and the glacial pace at which new features are implemented, mean that designers are still wasting time on browser workarounds and even gradually starting to migrate back to non-standard, proprietary but controllable web technologies.
If the browser wars were about a clash between different views on data, the fight could actually be interesting. Besides a few early and interesting art projects, it is not different browsers that stimulate multiple points of view, but the fact that viewers can decide for themselves to view the same page in radically different ways. The second level at which design work breaks apart is therefore more significant, exciting and harder to come to terms with. Avoiding ’tag soup’ benefits users who, for whatever reason, have trouble reading information from a screen. When a web page is marked up according to web standards, data can be more reliably extracted from the HTML and subsequently more efficiently processed by differently abled machines and humans alike. Search engines handle those pages faster than others; an RSS feed can be stripped of styling before travelling to other places; the same website displayed on a mobile phone requires a radically different layout simply to fit the minimal screen.
Most importantly, a standard web page can be more easily read out by special screen reader software and allows any user to enlarge text, colourise backgrounds or change the display of a page in whatever way they need. As a result, visually impaired users can potentially have access to the same information at the same speed and quality as anyone else, instead of having to wait until the Braille version makes it to the local library for the blind (if it makes it there at all). Disability rights groups successfully pressured governments and NGOs for ’accessible websites’, and at this moment it is the law in many countries that the online resources of public institutions are made available in compliant formats.
The ability of data to travel freely across contexts and places changes the way we deal with information, and this obviously means a paradigm shift for how design is done. To work with this rather than against it means taking web standards to heart and accepting that design takes distance from data, from its ability to respond to specific materials and assumptions about use.
Web standards fall back on the original characteristics of HTML (HyperText Markup Language), a textual layer that was conceived as a lingua franca, a universal go-between to glue together different objects in similar ways. Although the first versions contained style markup such as <b> (bold), HTML was designed for semantic markup, not for presentational use. Once the number of web pages started to explode, visual appearance gradually became more important, but all that web designers avant la lettre had available was structural HTML supplemented with proprietary tags. Worried by the mixing of markups, the W3C decided to develop a new language for styling, so that everything visual could be stored in separate, though interdependent, files: Cascading Style Sheets. Style elements such as <i></i> (italic) are replaced by their semantic equivalent <em></em> (emphasis), and even though most browsers continue to display text marked up as <em> in italics, compliant tags refer to semantic values, and no longer to typographic styles. This separation forms the core principle of all web standards. It makes it technically possible to change one element without having to touch another, making it easy, for example, to alternate between different looks of a page without ever touching the original HTML code.
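The principle can be sketched in a few lines of markup; the yellow highlight is an illustrative choice, not prescribed by any standard:

```html
<!-- A sketch of the separation: the HTML states what the text means,
     the style rule decides how that meaning looks. Swapping the rule
     below for another stylesheet never touches the markup itself. -->
<style>
  /* emphasis need not be italic: render <em> as a highlight instead */
  em { font-style: normal; background: yellow; }
</style>
<p>Standards make it <em>possible</em> to restyle a page without editing its markup.</p>
```

In practice the rule would live in an external .css file linked from the page head, so that the same HTML can cascade through any number of alternative presentations.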
But what are we supposed to separate exactly? From the meticulous documentation of the discussions that led to the development of CSS, it seems that not much time has been spent on discussing the choice of the word pairs ’substance’ versus ’form’, their later equivalent ’content’ and ’style’, or, even more outrageously, ’meaning’ and ’presentation’. The ease with which the various working groups are able to put such porous concepts to use as binary oppositions is not surprising, coming out of the bureaucratic culture of the W3C. Used in all possible combinations, we find substance, meaning, content, structure, text and data gathered on one side, and form, presentation, style, visual data, layout and design on the other. Similar but certainly not synonymous, terms in the first set refer to anything that can be expressed in lightweight ASCII characters, whereas the second set deals with everything else. A text’s structure (which parts are selected and how they are broken up into paragraphs, headers and sections) inevitably ends up in the first pile, but so do punctuation, tone, grammar and language. Essential information bits that are technically more complicated to handle, such as images, typography, colour, contrast, placement, layering and sound, end up in the ’other’ pile.
"Our Members work together to design and standardize Web technologies that build on its universality, giving the power to communicate, exchange information, and to write effective, dynamic applications―for anyone, anywhere, anytime, using any device”. Aiming at broad access rather than at specific communication, ’content’ needs to be treated as formless matter, because only once it is liberated from its specific instantiation can it travel to other devices or media, even to ones that have not been invented yet. In such an approach, the universal validity of the system itself needs to be considered neutral, because for ’content’ to cross continents and cultures, contextual patina weighs too much. Here lies the basic problem in the way web standards work: such a system can only function if we assume that the same ’content’, presented in a different way, communicates essentially the same message. Does the fact that the web is potentially everywhere mean that everyone reads the same things?
The vast and relatively uncharted terrain of dynamic design is more than most professionals can individually handle; typical web projects are done either by designers who create Photoshop mockups that are then sent off to programmers, or by programmers who create systems to which designers add beautiful surfaces. The handbook for Smarty, a PHP-based templating engine, warns not to confuse programming with design work: “Most often, designers are also programmers to some extent and vice versa. While designing you must totally forget that you are also a programmer and look at things only from a designer’s perspective. If you mix up your identities, there will be a great risk of nonstandard designs.” The practice of web design starts to replicate the way Tim Berners-Lee and other engineers imagined it. Confining programmers and designers to different files often means they end up in different physical spaces, too.
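The handbook's division is visible in the template files themselves. A minimal sketch of a Smarty template follows; the variable names $title and $articles are hypothetical, assigned by a programmer in a separate PHP file:

```smarty
{* article.tpl: the designer's file. Layout and markup only;
   where the data comes from is decided elsewhere, in PHP. *}
<h1>{$title}</h1>
<ul>
{foreach from=$articles item=article}
  <li><a href="{$article.url}">{$article.title}</a></li>
{/foreach}
</ul>
```

The designer never sees a database query; the programmer never sees a list item. The file format itself enforces exactly the split between identities that the handbook demands.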
“Any project which offers a limitless variety of ways in which information can be presented and structured can happen both faster and better if there is a way for designers and engineers to collaborate without requiring either group to fundamentally alter their way of looking at the world. (...) The Web exploded in the space of a few years in part because it effected this separation of functions so decisively” 
A clear division of labour facilitates the kinds of collaboration that do not concern the object of collaboration itself. But it assumes that the architecture of a data structure is itself without form; that the work of a programmer has no influence on the way a page will look and feel. The separation is especially hard to maintain now that most web pages are produced with the help of web applications such as Content Management Systems. Design decisions need to be made on the level of layout, software and database design simultaneously. Rather than just styling or coding, designers and programmers manipulate information systems, and for this they need to collaborate.
Most web applications work with a set of templates containing code used by the server to generate HTML output from a database. They are in fact design engines, using computer scripting to automate either all or part of the process of selecting, laying out, placing, combining and aligning. SPIP, a free software CMS, uses the French term ’squelette’ (’skeleton’ or ’backbone’) for this type of file, which is an interesting reading of what they do: defining the structure of a website at the same time and place as its presentation. Written in dialects of PHP or Python, some are specific to the structure of a particular CMS and others are designed for more generalised use. Template files act as a point of contact between designers and programmers; they are negotiable spaces where decisions about data access, presentation and data manipulation are brought together.
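A sketch of what such a squelette can look like, using SPIP's documented loop syntax; the five-article limit and the list markup are illustrative choices:

```html
<B_recents>
<ul>
<BOUCLE_recents(ARTICLES){par date}{inverse}{0,5}>
  <li><a href="#URL_ARTICLE">#TITRE</a></li>
</BOUCLE_recents>
</ul>
</B_recents>
```

Selection criteria ({par date}{inverse}) and presentation (the list markup) sit in the same lines of the same file: the skeleton defines structure and look at once.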
It is exactly this mixing of identities which could bring about new scenarios for use. Paraphrasing Clay Shirky, “to collaborate requires either group to fundamentally alter their way of looking at the world”. It means imagining yourself in another place, and that might save you from the all too tight corset of conventional usage, reinscribed by conventional forms. It could shift the work of design away from ’setting restraints’ to ’making possible’, from setting borders to the creation of flexible backbones.
It is obviously convenient to organise web development according to a so-called ’three-tier architecture’, confining the work of programmers, designers and database engineers to separate layers, and the flexibility and speed of work gained from a clear division of tasks is not to be underestimated. But if design is more than packaging, then work needs to be shared rather than divided, to allow for precise connections between presentation and content.
If you think about web design as the work of articulation, of making temporary alliances which somehow have the potential to bring perspective to the data presented, then design has to go deeper than skin. It means engaging in the untidy interdisciplinary practice of rendering visible the relations between database architecture, filtering and the structure of the data itself. Each time data appears in a new context, it is a rewriting, a restaging, a reinterpretation. For that to happen in meaningful ways, code, content, behaviour and presentation need to mix and mingle.
“That is what articulation does,” says Donna Haraway. “It is always a non-innocent, contestable practice; the partners are never set once and for all. There is no ventriloquism here. Articulation is work, and it may fail.”
The work of the W3C itself is financed through contributions of member organisations; software vendors, universities and telecom companies pay up to 65,000 dollars (41,600 euros) per year out of interest in co-defining the way the web can be made (inter)operable. http://www.w3.org/Consortium/
From a presentation by Steven Pemberton at A Decade of Webdesign, a conference organised by the Piet Zwart Institute and the Institute for Network Cultures, January 2005. Transcript: http://www.decadeofwebdesign.org/transcribes.html#pemb
Literary scientist N. Katherine Hayles argues for taking the ’rendering concrete’ of digital text into account. She calls for ’media-specificity’, “ensuring that discussions about the text’s ’meaning’ will also take into account its physical specificity as well". Hayles, N. Katherine. Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis. in: Poetics Today, Volume 25, Number 1. Duke University Press, Spring 2004
Proprietary tags such as <marquee></marquee> are well-known examples everyone loves to hate.
Jeffrey Zeldman, standards evangelist and author of Designing With Web Standards, writes: “Certainly the W3C moves at a glacial pace. It’s why we write float when we mean column. But a glacial pace isn’t all bad, especially if you’re driving off a cliff (which I gather we are). Driving off a cliff at a glacial pace affords you the luxury to turn around. I loves me some glacial pace.” Zeldman, Jeffrey. What Crisis? [weblog entry]. The Daily Report, August 2007. (http://www.zeldman.com/2007/08/15/whatcrisis/). Accessed June 2008.
Separating ’content’ from presentation is only the most basic level of work needed for a web page to be considered ’accessible’. For concise and convincing arguments why designers should care, see: Clark, Joe. Building Accessible Websites. New Riders, 2002. Available online: http://joeclark.org/book/sashay/serialization/
Hayder, Hasin. Smarty PHP Template Programming and Applications. Packt Publishing, 2006
Haraway, Donna. The Promises of Monsters. in: Lawrence Grossberg, Cary Nelson, Paula A. Treichler, eds., Cultural Studies. Routledge, 1992
Download document: http://snelting.domainepublic.net/texts/divide_share.pdf
Constant Verlag is a repository of texts from the depths of the Constant archives. Some of these texts were already available online, others just saved on one of our hard drives; some written in French, others in English or Dutch; recent, or as early as 1997. As most texts have been published under open content licenses, you are invited to use, copy, modify and redistribute the material.