Exploring Digital Humanities Practices
Digital Humanities
Digital humanities, born of the encounter between traditional humanities and computational methods, comprise new modes of learning and new institutional units for collaborative, trans-disciplinary, computationally engaged research. They are also a convergent practice that explores a universe in which print is no longer the primary medium of knowledge.
1. Modern humanities: literature, philosophy, classics, rhetoric, history, studies of art, music, and
design
2. Digital Humanities: the melding of humanities inquiry with digital technologies; it requires new forms of design for trans-media modes of argumentation as a model for contemporary work.
3. Knowledge Design: designing new structures of argumentation, not yet codified, which could be inspired by (Jeffrey Schnapp):
- Communication/Graphic/Visual Design
- Interaction/User Experience Design
- Media Design
"Digital" is, for now, a transitional label attached to the humanities: the migration from traditional media to digital ones is a process analogous to the flowering of print culture from the Renaissance onwards. The field is also concerned with collaboration, and the dialogue runs not only across disciplines but across the pure/applied, qualitative/quantitative, and theoretical/practical divides.
This intertwining of scholarly method, computational capacity, and new modes of knowledge formation (enabled by networked, digital environments) makes possible what we term the Generative Humanities: a mode of practice that depends on rapid cycles of prototyping and testing, a willingness to embrace productive failure, and the realization that any "solutions" generated within the Digital Humanities will spawn new "problems," and that this is all to the good.
But why are all these arguments important for arts management? From different points of view:
- Technical: familiarity with data types and file formats, database knowledge, platforms for content
management and authoring, interface design, game engines, etc.
- Intellectual: cross-cultural communication, generative imagination, iterative and lateral thinking,
ability to think critically with digital methods, a global, trans-historical and trans-media approach to
knowledge and meaning making
- Administrative: intellectual property, institutional circumstances, sustainability / funding /
preservation
The emphasis on design depends on robust technological environments to manifest across media, so we
discuss how the basics of computation and processing affect the design and implementation of Digital
Humanities projects. These projects engage with any number of different methodologies and approaches, but here we concentrate on four that are central to contemporary humanistic inquiry: curation, analysis, editing, and modeling.
Important is the transition from humanism to the humanities, whose roots run from the medieval trivium and quadrivium to the Renaissance. The Renaissance played a fundamental role in the shift from a church-dominated to a human-centered worldview, with a focus on human subjects.
The gradual transformation into modern humanities was influenced by editorial practices in recovering
classical works, the establishment of universities, and the impact of the printing press.
But to make the argument for why the humanities remain more necessary than ever, we must go beyond
mere bromides celebrating the inherent value of cultural tradition or the inherent value of a familiarity with
certain achievements from the cultural-historical past. No matter how imperiled by vocationalism, cost-
cutting administrators, or the self-inflicted wounds of internecine battles, the humanities must survive
because they embody distinctive modes of producing knowledge and distinctive models of knowledge
itself.
We refuse to take the default position that the humanities are in “crisis,” in part because this very rhetoric
of crisis has persisted for well over a century, however many mutations it has undergone. Jeremiads
regarding the decline of educational standards, the failure of students and faculty alike to adequately
embrace humanistic ideals, and the demise of tradition may well be inherent to the process of education
itself.
Digital Humanities adopts a different view: It envisages the present era as one of exceptional promise for
the renewal of humanistic scholarship and sets out to demonstrate the contributions of contemporary
humanities scholarship to new modes of knowledge formation enabled by networked, digital
environments.
Starting from the late 1940s, early projects inspired archival endeavors at Oxford in the 1970s. The
humanities embraced the digital to extend traditional scholarship toolkits, focusing on corpus building, text
encoding standards, and database creation.
The first wave of Digital Humanities in the late 20th century emphasized structured humanities data to
interact effectively with computation. Database tools laid the foundation for projects worldwide, sharing
common features such as textual analysis, linguistic study, and a focus on pedagogical support (the Perseus Project, the Women Writers Project, The Valley of the Shadow).
The transition to the Web in the early 1990s accelerated digital scholarship's shift from processing to
networking. Scholars grappled with multimedia expressions and innovative methods, moving beyond text-
based models. In the late 1990s, projects emerged creating visualizations, geospatial representations,
simulated spaces, and network analyses. Challenges and opportunities persist in integrating technological
underpinnings with humanistic methods and values. Overall, the narrative highlights the ongoing evolution
of humanities in the digital era.
The intersection of printed books, humanistic scholarship, and the evolving landscape of digital communication is important for the digital humanities because it highlights the historical consistency in humanists' use of formats like the printed page and the bound codex.
The transition to digital environments necessitates new forms, tools, and perspectives, emphasizing the role
of design. In the 21st century, communication is diverse, extending beyond linear text to scalable databases,
visualizations, video lectures, and multiuser virtual platforms.
The text explores the significance of screen culture in Digital Humanities, emphasizing the complementary and sometimes tension-filled relationship between language and visuals. The suite of expressive forms
now includes sound, motion graphics, animation, and code remixing. Engaging with design fields, especially
communication and interaction design, becomes crucial for those involved in Digital Humanities work.
Like their print predecessors, format conventions in screen environments can become naturalized all too
quickly, with the result that the thinking that informed their design goes unperceived. Though there is no
“natural” way to interweave text, images, sound and moving images, there exists a range of available genre
models from experiments unique to the digital realm to ones that draw upon prior moments in the history
of print and cinematic conventions.
Design in the digital humanities spans a broad spectrum, shaping arguments and methodologies. It encompasses various activities, from everyday tasks to specialized areas like critical design and human-computer interaction. The collaboration between humanists, designers, and technologists is highlighted, focusing on the creative practice of design within digital humanities.
Design is not just a technique but also an intellectual method when used to frame questions about
knowledge. It delves into the historical and contemporary intersections of design and the humanities,
drawing parallels with influential 20th-century experiments combining visuals and text. The dynamic nature
of digital media, with its fluidity and remixing capabilities, is contrasted with traditional stabilizing practices
like writing and printing.
The evolving landscape of digital humanities is discussed, acknowledging the emergence of polymaths
capable of diverse skills but also recognizing the importance of collaboration. The need for recognizing
intellectual contributions in team-based digital humanities projects is highlighted, challenging traditional
concepts of authorship. The text concludes by cautioning against potential negative consequences of
demanding expanded skill sets without commensurate compensation and the need for adapting reward
structures in the changing research environment.
We can trace the structure and development of Digital Humanities projects across various levels. The first is the level of basic computation, which involves programming, processing, and protocols. The text
highlights the inherent contrast between the ambiguous and interpretative nature of humanistic disciplines
and the need for disambiguation in computational processes. Despite this disparity, humanists have
adapted to computational methods, and conversely, computational methods have been influenced by
humanistic approaches.
The second level involves processing, exploring how computational capacities are utilized in tasks such as stylometrics, concordance development, and indexing. The introduction of structured data and markup
languages allows for interpretation within the digitized context. The enduring aspects of computational
foundations and processing activities are complemented by the development of platforms, tools, and
infrastructures supporting curation, analysis, editing, and modeling in Digital Humanities projects.
The text delves into the cultural, institutional, and technical aspects influencing curation, analysis, editing,
and modeling. It emphasizes the importance of humanists in shaping the knowledge and design of digital
projects, preventing the loss of essential elements in scholarship and pedagogy.
The passage then discusses prototyping and versioning in Digital Humanities, highlighting the field's
capacity for rapid project creation, testing, and reworking. It emphasizes the need to balance normalization
with continued experimentation, risk-taking, and a willingness to accept failure as part of the iterative
process.
The concept of Generative Humanities is introduced, emphasizing the transformative potential of digital
tools and platforms in shaping a dynamic and experimental core for education. The text argues for a core
curriculum in the Digital Humanities that fosters creativity, critical thinking, and adaptability, preparing
students for the challenges of the 21st century. It discusses the need for assessment standards to
accommodate new forms of pedagogical exercises, including game design, multi-player narratives, and
online exhibits.
The passage reflects on the evolving nature of pedagogy, influenced by ubiquitous networks and the
outsourcing of memory in the digital age. It advocates for a generative humanities core that goes beyond
vocational training, addressing the social, political, and ecological challenges of the modern era. The text
concludes by highlighting the potential of Digital Humanities to expand the reach and impact of technology,
fostering a more inclusive and globally relevant form of scholarship that engages with the broader public.
Studio Azzurro
The Database as System and Cultural Form: Anatomies of Cultural Narratives
In the field of digital art, "database aesthetics" describes the aesthetic principles applied in imposing the logic of the database on any type of information, in filtering data collections, and in visualizing data. It thus often becomes a conceptual potential and cultural form, a way of revealing (visual) patterns of knowledge, beliefs, and social behavior. In a restricted sense, it refers to the aesthetics of the database itself.
The difference between old and new databases is that the new ones allow us to retrieve and filter the data in multiple ways. Databases differ in the way they store data:
- Hierarchical Databases, which arrange the data in hierarchies, like a tree structure with parent/child relationships.
- Network Databases, which are still close to the hierarchical model but use "sets" to establish a hierarchy that allows children to have more than one parent, thus establishing many-to-many relationships.
- Relational Databases, the most common form, are based on the research of Dr. E. F. Codd at IBM in the late 1960s and rely on the concept of tables (the relations) that store all data. In this kind of database, we do not need to know how the information is physically stored, because the data reside in distinct tables, each identified by a unique name that can be addressed and queried inside the database.
- Client/Server Database, which come in various forms and allow multiple clients to, remotely and
simultaneously, access and retrieve information from a database server around the clock.
- Object-Oriented Databases, which are designed to work well with object-oriented programming languages such as Java and C++.
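As a minimal sketch of the relational model described above, Python's built-in sqlite3 module can show how data live in named tables that are queried without any knowledge of their physical storage. The table and column names here are hypothetical, chosen only for illustration:

```python
import sqlite3

# An in-memory relational database: each relation is a table with a
# unique name that can be addressed in a query.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE artworks (id INTEGER PRIMARY KEY, title TEXT, year INTEGER)")
con.execute("CREATE TABLE artists (id INTEGER PRIMARY KEY, name TEXT, artwork_id INTEGER)")
con.execute("INSERT INTO artworks VALUES (1, 'TextArc', 2002)")
con.execute("INSERT INTO artists VALUES (1, 'W. Bradford Paley', 1)")

# A join relates the two tables through a shared key; the caller never
# needs to know how or where the rows are physically stored.
row = con.execute(
    "SELECT artists.name, artworks.title, artworks.year "
    "FROM artists JOIN artworks ON artists.artwork_id = artworks.id"
).fetchone()
print(row)  # ('W. Bradford Paley', 'TextArc', 2002)
```

The unique table names are what make the "call and ask" behavior described above possible: any table can be queried or joined by name alone.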
A database is a system comprising the hardware that stores the data; the software that houses the data in its respective containers and allows for retrieving, filtering, and changing it; and the users who add a further level by understanding the data as information.
Digital art needs mathematics and algorithms to be born. The digital medium is not by nature visual but
always consists of a “back end” of algorithms and data sets that remain hidden and a visible “front end”
that is experienced by the viewer/user.
As Lev Manovich puts it, any new media object consists of one or more interfaces to a database of
multimedia material, even if it does not explicitly point to its roots in a database.
Database aesthetics suggests the possibility of tracking processes, whether individual, cultural, or communicative, in their various forms. The understanding of a database as the underlying principle and structure of any new
media object delineates a broad field that includes anything from a network such as the Internet (as one
gigantic database) to a particular data set.
WebStalker and netomat™ have questioned the conventions of exploring the web through browsers that are predesigned like a book.
I/O/D has created the "medium" of alternative browsers by expanding the functionality of existing browsers in an aesthetic and creative form that reveals the Internet's "database architecture." In his essay "Visceral Facades: Taking Matta-Clark's Crowbar to Software," I/O/D's Matthew Fuller establishes a connection between the WebStalker's approach to information architecture and American artist Gordon Matta-Clark's technique of literally "splitting" the existing architecture of buildings, an application of formal procedures that would result in a revelation of structural properties.
Netomat™ reveals how the ever-expanding network interprets and reinterprets cultural concepts and
themes and takes visitors for a ride into the Internet’s “subconscious.” The visualization and “dynamic
mapping” of real-time data streams has become a broad area of inquiry in digital art, and quite often
several projects visualize a similar data set in distinctly different ways. It ultimately always is a visualization
of data, be it a “closed” database with a preconfigured, limited number of materials or an “open” one that
organizes real-time data flux.
Lev Manovich said that database and narrative are natural enemies. Competing for the same territory of
human culture, each claims an exclusive right to make meaning out of the world. According to Manovich,
the database presents the world as a list of items that it refuses to order (which is certainly true on the
level of the data container), while a narrative requires a protagonist and narrator, a text and story, and
cause-and-effect relationships for seemingly unordered events.
THIS DOESN’T MEAN THAT DATABASE AND NARRATIVE ARE MUTUALLY EXCLUSIVE FORMS.
Art projects frequently apply the principles and logic of the database to existing, often originally analog
information (ranging from a book to movies, television series, and postcards) to reveal relationships that
remain unseen in the original format. W. Bradford Paley’s TextArc (2002), for example, treats the book as a
database (data container) and arranges it in its smallest units, words and lines, which can be filtered according to various principles.
Part of TextArc’s beauty derives from the fact that the project creates a new form of data container and
spatial model for the book on the visual front end. The representation of a novel’s entire contents and its
structural elements on a single page constitutes a radical break with the book’s traditional spatial model
and a shift in focus. The narrative itself moves to the background, while the patterns of its construction
become a focus of attention. What the project illuminates are structural patterns and symmetries that
presumably are not obvious during the reading (and writing) process.
A very different look at the construction of narrative, in this case visual or cultural, is provided by the works
of Jennifer and Kevin McCoy, who experiment with a form of enhanced cinema that focuses on the
construction of single shots and the messages they convey.
They have created different works such as Every Shot Every Episode (2001), How I Learned (2002), and 201: A Space Algorithm (2001):
- Every Shot Every Episode (2001) deconstructs episodes of Starsky and Hutch into a database of single units (e.g., every zoom in and every zoom out). What Every Shot Every Episode creates, however, is a record of the elemental aesthetics of familiar genres, the subtexts of stereotypes, and formulaic representation that the viewer otherwise would not necessarily perceive with this clarity.
- How I Learned (2002) exposes the cultural conditioning of learned behavior by structuring the
Eastern-Western television series Kung Fu in categories such as “how I learned about blocking
punches,” “how I learned about exploiting workers,” or “how I learned to love the land.”
- 201: A Space Algorithm (2001) is an online software program that allows viewers to re-edit Stanley Kubrick's science-fiction film 2001: A Space Odyssey by selecting individual shots and compressing or expanding viewing time; in this way viewers are able to change not only the spatial components but also the temporal construction. As a result, many questions arise about cinema's time-and-space paradigm.
A broader look at cultural values and representation unfolds in the projects of George Legrady, who has consistently explored the archive and database as cultural record. His project Pockets Full of Memories (2001), an installation with an accompanying Web site, explicitly focuses on the "mechanics" of database construction and the way in which we arrive at levels of evaluation through linguistic description.
Pockets Full of Memories creates an “anatomy” of personal value by inviting visitors to digitally scan an
object in their possession at a scanning station and answer a set of questions regarding the object, rating it
according to certain attributes. An algorithm (the Kohonen self-organizing map) arranges the objects on a two-dimensional map, classifying one object after another and positioning each one next to the objects with the most similar attribute values.
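The mechanism behind this placement can be sketched as a toy Kohonen self-organizing map in pure Python. This is an illustrative implementation under simplifying assumptions (tiny grid, fixed decay schedules), not the project's actual code:

```python
import math
import random

def train_som(items, grid=6, dim=4, epochs=40, seed=1):
    """Train a minimal Kohonen self-organizing map.

    items: attribute-rating vectors with values in [0, 1], e.g. one
    vector per scanned object. Returns the grid of weight vectors and
    a helper mapping an item to its best-matching cell (x, y).
    """
    rng = random.Random(seed)
    # One randomly initialized weight vector per cell of the 2-D map.
    weights = [[[rng.random() for _ in range(dim)]
                for _ in range(grid)] for _ in range(grid)]

    def bmu(vec):
        # Best-matching unit: the cell whose weights are closest to vec.
        best, best_d = (0, 0), float("inf")
        for y in range(grid):
            for x in range(grid):
                d = sum((w - v) ** 2 for w, v in zip(weights[y][x], vec))
                if d < best_d:
                    best, best_d = (x, y), d
        return best

    for t in range(epochs):
        lr = 0.5 * (1 - t / epochs)                      # decaying learning rate
        radius = max(1.0, grid / 2 * (1 - t / epochs))   # shrinking neighborhood
        for vec in items:
            bx, by = bmu(vec)
            # Pull the winner and its neighbors toward the input vector.
            for y in range(grid):
                for x in range(grid):
                    dist2 = (x - bx) ** 2 + (y - by) ** 2
                    if dist2 <= radius ** 2:
                        h = math.exp(-dist2 / (2 * radius ** 2))
                        cell = weights[y][x]
                        for i in range(dim):
                            cell[i] += lr * h * (vec[i] - cell[i])
    return weights, bmu

# Two clusters of hypothetical object ratings: low values vs. high values.
weights, bmu = train_som([[0.1] * 4] * 5 + [[0.9] * 4] * 5)
a, b = bmu([0.1] * 4), bmu([0.9] * 4)
print(a, b)  # two different grid positions
```

After training, objects with similar attribute ratings land in nearby cells while dissimilar objects land far apart, which is the behavior the installation's growing map of relations relies on.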
Users can review each object’s data and add their own personal comments and stories. The result of the
project is a growing map of relations between items that range from the merely functional to a signifier of
personal value. The mapping of these objects illuminates how each object is contextualized by its
surrounding data and points to the potentialities and absurdities of classifying objects endowed with
personal meaning. The project operates on the threshold between logical classification and meanings that
seem to elude quantifiable values.
Legrady’s interactive CD-ROM/installation Slippery Traces (1997) invited viewers to navigate through more
than 240 linked postcards—ranging from tourist sites to personal, military, or industrial images—that were
categorized according to topics such as nature, culture, technology, morality, industrial, and urban
environments. Viewers first choose one of three quotes appearing on the screen, each of which embodies a
different perspective (anthropological, colonialist, or media-theoretical) and thus provides an interpretive angle for the experience of the project.
This narrative is obviously very much dependent on the database system as a whole—the interplay
between the container, the algorithm, and the interpretation of the user. Database categories and the
algorithms filtering them are never value-free but always are inscribed with an interpretive angle.
The characteristics of the database as a collection of information that can be structured according to
various criteria and result in a meta-narrative in many ways differ from the concept of the traditional
narrative.
A project that juxtaposes and fuses the “narrative engines” of the database and photography/moving
images is Natalie Bookchin's CD-ROM The Databank of the Everyday (1996). Bookchin's project, a conceptually infinite database of life itself in all its mundane activities, uses elements of the computer database and of the image catalog, and identifies the loop as a narrative engine driving both. The screen is divided into two parts: one shows a looped clip of a woman shaving her leg, while the other displays the code and instructions for the arm movement.
A very different and original approach to moving the background to the foreground unfolds in The Secret Life of Numbers by Golan Levin, with Martin Wattenberg, Jonathan Feinberg, Shelly Wynecoop, David Elashoff, and David Becker. The project does not take an existing "story" as its source but studies the relative popularity of every integer between zero and one million. The artists determined the popularity of numbers and expose their secret life as patterns and associations that reflect cultural interests. The interactive
visualization consists of two interfaces: a histogram where the popularity of numbers is indicated by the
length of lines protruding from them, and a graph consisting of cells that make up a grid arranged in rows of
one hundred where the more popular integers have brighter cells. A menu indicates the value of popularity
for any chosen number and reveals its “associations” (such as historical dates).
The fact that the data were gathered largely through Web-based statistics also becomes a reflection of
people’s interest within this particular environment.
Given the fact that database structure in the broadest sense lies at the root of digital media, it is only natural
that database aesthetics play a major role in digital art and culture. The 1990s were a decade of major
digitization, when libraries, archives, and museum collections were translated into digital format, allowing
for new forms of filtering and relational connections. However, it seems that “database aesthetics” in the
broadest sense has become emblematic of our time, extending beyond the digital realm, and transcending
the traditional archives of the library and museum.
The digital revolution is reconnecting collection-building and curation with the practices of Digital Humanities. Historically, the accumulation and care of knowledge were crucial in classical and premodern regimes, where fragments of information held intrinsic value. With the advent of printing and
modern institutions of memory, a new regime emerged where the proliferation of historical and cultural
information changed the value of past data, now considered supports for knowledge production rather than
possessing intrinsic value.
Today, new forms of digital connection are developing, including digital collection-building and curation,
multimedia modes of argumentation, large-scale collaborations, and critical editions of multimedia
artifacts. The scope and scale of digital collections far exceed the capacities of traditional institutions,
leading to a crisis in print-based scholarly publishing. Critical curation gains importance as an essential
academic practice in the era of digital and post-digital publishing.
In common parlance, curation refers to the supervision and organization of preserved or exhibited
physical items, although the term has origins in the theological domain, as in the curates of the church who were charged with the care of souls.
What these senses of the term point to is the same urge animating the work of digital humanists: that the mere existence
of vast quantities of data, artifacts, or products is no guarantee of impact or quality. To curate is to filter,
organize, craft, and, ultimately, care for a story composed out of the infinite array of potential tales, relics,
and voices.
In the Digital Humanities, curation refers to a wide range of practices of organizing and re-presenting the
cultural record of humankind in order to create value, impact, and quality.
In the digital age, digital environments allow the merging of different versions of a work, tracking
developments and variants. Textual fluidity, highlighted by the advent of word processing and hypertext,
offers new opportunities for manipulation and analysis through tools like natural language processing.
Also important is the shift from individual authorial identity to the collective and aggregated identity in fluid
textuality, along with the emergence of augmented editions that expand analysis and editing practices,
enabling a deeper understanding of works in a broader cultural context.
Digital environments create new modes of relational content, with sequence, juxtaposition, and
navigation defining augmented editions.
The quantity of produced data is staggering. Statistics vary, but there are over 21 billion indexed web pages,
one trillion URLs indexed by Google, and over 14 million cataloged books. This exponential increase in the
production, sharing, and archiving of cultural material, including texts, images, audio, and temporal data,
surpasses our capacity to manage and analyze it meaningfully.
Humanities, historically focused on detailed analysis of limited datasets, now face the challenge of
analyzing the growing volume of data. While the sciences have often employed the law of large numbers,
obtaining more verifiable conclusions with larger datasets, the humanities have traditionally worked with
smaller datasets. With digital technologies, the ability to store and analyze millions of books, billions of
tweets, and hundreds of billions of interactions opens new ways to query the cultural record and
understand concepts, trends, actions, and communication flows on a macro scale.
Digital analysis offers new perspectives on human relationships with images, raising questions about the
market value of images, differences between freely circulated and paid images, and the ability to identify
global patterns and regional differences in visual culture.
Therefore, it is necessary to develop new tools to examine, analyze, visualize, map, and assess the
abundance of data and cultural material produced in the digital age. These activities will require
humanists to engage with tools such as text-mining, machine reading, and various forms of algorithmic
analysis.
"Distant reading," an analysis focusing on larger units to reveal patterns and interconnections through
shapes, relations, models, and structures, can be a useful approach. However, it is now time to consider
"machine reading," where trends, correlations, and relationships are extracted through computational
methods.
As information production surpasses human understanding, it becomes impossible to analyze the digital
cultural record without the assistance of digital tools and methods. An ethically complex example involves
applying machine reading and algorithmic analysis to the 52,000 video testimonies on the Holocaust in the
USC Shoah Foundation Institute archives.
Transforming these testimonies into units of data and statistical analyses could further objectify the victims,
introducing a form of dehumanization. Questions arise about the "ethics of the algorithm" and the
possibility of mediating between the ethical need to listen to individual Holocaust testimonies and the
macroscopic perspective offered by statistical representations. Digital humanists are crucial to integrating
technological tools with values, critical skills, and historical knowledge inherent to the humanities
disciplines.
The expansion of digital databases and advancements in text-mining and cultural analytics have introduced
new approaches through distant reading. In Digital Humanities, distant reading deliberately disregards the
specifics of individual texts in favor of extracting trends from a corpus.
Distant reading is not merely a digitization but represents a novel research approach, utilizing
computational methods to explore the history of ideas, language use, cultural values, and cultural
processes. Contrary to the opposition between distant reading and close reading, connections between
macro and micro, overarching trends, and in-depth hermeneutic inquiries emerge, providing a flexible
approach.
Digital humanists navigate between visualizations, zooming in on global trends and focusing on detailed
analyses. The combination of census data with oral histories and biographies illustrates complexity, as does
the view from Google Earth, integrating satellite data with personal experiences. Innovation in Digital
Humanities can lead to advanced mapping approaches, exploring diverse geographies and geospatial
representations, but designing such platforms remains a challenge.
Cultural analysis does not directly deal with cultural artifacts but operates on digital models of these
materials in an aggregated manner. Instead of pitting "close" hermeneutic readings against "distant" data
mappings, the goal is to appreciate the synergies and tensions between in-depth, localized analysis and a
macrocosmic view.
Furthermore, cultural analysis expands the canon of objects and cultural materials considered by
humanities scholars, encompassing both digitally transformed traditional cultural objects accessible in
various formats and "born digital" objects. The aggregation of vast amounts of information allows the
merging of data or files, presenting them in displays that highlight distinctive features such as data points,
clusters, and trends. Structured data lend themselves to this processing, as one can easily extract dates,
places, quantitative information, names, or other elements from a set of files to analyze their contents.
In text processing, the analysis of word frequency and usage (e.g., the n-gram approach) is a way of
aggregating information and data, merging individual instances to extract information from the whole.
Cultural mashups often aggregate materials in novel ways, allowing digital manipulation to repurpose
sources. On the other hand, composite analysis preserves individual elements but uses patterns among
them to show something about the whole set of discrete elements.
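The frequency-aggregation move behind the n-gram approach mentioned above can be sketched in a few lines of Python; the function name and sample corpus are arbitrary illustrations:

```python
from collections import Counter

def ngram_counts(text, n=2):
    """Count word n-grams: sequences of n consecutive words.

    Merges individual word occurrences into frequencies for the whole
    corpus, the basic aggregation step of n-gram analysis.
    """
    words = text.lower().split()
    grams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    return Counter(grams)

corpus = "to be or not to be that is the question"
counts = ngram_counts(corpus, 2)
print(counts.most_common(1))  # [('to be', 2)]
```

Run over millions of books instead of one sentence, the same tally reveals trends in language use over time, which is exactly the information "extracted from the whole" rather than from any single text.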
Data mining is a term that encompasses a variety of techniques for analyzing digital material by
"parameterizing" some feature of information and extracting it. This implies that any element of a file or
collection of files that can be explicitly specified or parameterized can be extracted from those files for
analysis. The "mining" of this data often involves creating a visualization of the results as statistics, texts, or
in an information graphic known as data visualization.
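"Parameterizing" a feature and extracting it can be made concrete with a toy example: pulling four-digit years out of a set of catalogue records and aggregating them as simple statistics. The records and the chosen parameter are invented for illustration:

```python
import re
from collections import Counter

# Invented catalogue records standing in for a collection of files.
records = [
    "Letter from Milan, 1921, discussing the exhibition.",
    "Exhibition catalogue, Venice, 1948.",
    "Press review, 1948, on the Venice show.",
]

# The parameterized feature: any four-digit year starting with 1 or 2.
year_pattern = re.compile(r"\b[12]\d{3}\b")

years = [int(y) for rec in records for y in year_pattern.findall(rec)]
frequency = Counter(years)          # aggregate: how often each year occurs
print(sorted(frequency.items()))    # a minimal "visualization" as statistics
```

The same pattern generalizes: any element that can be explicitly specified (dates, places, names) can be "mined" this way and handed to a visualization layer.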
Data visualization techniques, derived from large datasets, are often associated with visualizing
quantitative information. Mapping, based on the history of cartography, ranges from historical mapping to
psychogeography, enabling the creation of new critical awareness of urban environments. Cognitive maps
model human experience without necessarily anchoring to quantitative data.
Experiential visualization leverages movement in three-dimensional environments over time and space as
the primary mode of engagement. Historical simulation environments offer immersive experiences,
prioritizing interpretation, analysis, and experimentation, allowing for new research questions. They do not
aim to represent the past in a positivistic manner but rather investigate a state of knowledge.
Visualization can be employed in various ways to outline, map, or model an argument. It has the power to
unleash imaginative and conceptual potential, enabling humanists to model knowledge in new ways. The
integration of these techniques with traditional methods enriches the discussion on meaning production,
providing valuable contributions that combine distant readings, attention to individual texts, and
aggregation techniques to explore extraordinary aspects and anomalies. Competence in reading and digital
visualization is essential for assessing meaning in these new formats.
Despite these changes, challenges persist, particularly regarding inequalities in access and participation in
the connected world. Various interests compete for control, and there are disparities in cultural property
attitudes, licensing, and sustainability pressures. Collaboration among researchers is reshaping the
academic landscape, allowing for the creation, narration, and enrichment of physical landscapes through
various technologies.
A significant subfield in the digital humanities is "Digital Cultural Mapping" or "Spatial Humanities." This
field utilizes geographic analysis, digital mapping platforms, and interpretive historical practices to conduct
multidimensional investigations of specific places. Unlike traditional mapping approaches, these practices
prioritize experiential navigation, epistemologies of representation, and the rhetoric of visualization.
In summary, the digital transformation is profoundly altering how knowledge is conceived and shared in the
humanities, fostering greater interactivity and novel exploration of places through innovative mapping and
representation approaches.
Accumulation is no longer sufficient to preserve cultural heritage; "animation" is necessary from the
archival stage. This involves a user-centered approach in building archives, engaging user communities, and
integrating curation tools into access portals. Digitization allows institutions to expand their virtual
footprint, surpassing physical barriers and engaging local and global communities. This opportunity space
is opened by the Digital Humanities, offering the chance to completely redefine communities, the public,
and missions.
Glocalization, combining local and global elements, will become more prominent. The challenge for
institutions is to build platforms that strengthen access, outreach, and create new models of imagination,
quality, and rigor.
Digital Humanities projects involve multidisciplinary teams, with professionals and citizen-scholars
contributing. Distributed access allows the public to interact with content through various points and
platforms. Ambient networks, with mobile devices, offer new ways of accessing and interacting with
knowledge.
Humanities Gaming
Digital Humanities students engage in interactive games such as "Soweto '76," "Metadata Games," and
"Virtual Peace," exploring themes like race, power, and education. Games, once underestimated in
academia, are emerging as valuable teaching tools, leveraging the surge in processing power and
connectivity.
Projects involve multidisciplinary teams and reflect the acculturation of a generation raised on gaming. The
games' ability to provide engaging interaction and satisfaction through overcoming challenges makes them
valuable educational instruments. The future challenge is to integrate the fun of humanities research into
academic games.
Scholars begin by exploring the history of encoding practices, from punch cards used in Jacquard looms to
modern programming languages. Software is analyzed as a language with grammar and syntax, with a
specific focus on algorithms and their cultural context. The fascination with game engines drives research in
narrative design and complex game interactions. Critical analysis views hardware and software elements as
study objects, providing a deep understanding of computational processes as forms of composition.
Database Documentary
The database documentary is a genre within Digital Humanities characterized by its modular, branching, and
hypertextual structure. Unlike cinematic documentaries, database documentaries do not follow a linear
narrative but are constructed as paths through a real or virtual database. They utilize a wide range of media,
including film, video, sound, static images, text, animation, and digital documents. Users don't "watch" the
documentary but "perform" it by following guided paths.
These documentaries provide greater flexibility in movement and pacing, allowing users to control the
temporal sequence, duration, and other critical elements. The multilinear nature of database
documentaries presents unique challenges but also offers new opportunities for scholarly argumentation
and creative expression.
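The multilinear structure described above can be modeled as a graph of media nodes, where each "performance" of the documentary is one path through the graph. The node names and links below are invented for illustration:

```python
# Each node is a media segment; edges are the user's possible next choices.
documentary = {
    "intro":        ["archive_film", "interview"],
    "archive_film": ["interview", "ending"],
    "interview":    ["ending"],
    "ending":       [],
}

def all_paths(graph, node="intro", path=None):
    """Enumerate every guided path a user could 'perform' from the intro."""
    path = (path or []) + [node]
    if not graph[node]:
        return [path]
    return [p for nxt in graph[node] for p in all_paths(graph, nxt, path)]

for p in all_paths(documentary):
    print(" -> ".join(p))
```

Even this tiny graph yields several distinct viewings, which is exactly why users "perform" rather than "watch" a database documentary.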
The remix culture, prevalent in the participatory web, enables individuals to create original works by
deriving from others, forming a complex set of associations and references. While universities have
traditionally emphasized the originality of research, the remixable and repurposable culture calls for a
reconsideration of academic structure and its adaptability. Flexibility and creativity, rather than static
knowledge, should be valued, opening new ways of thinking and problem-solving.
Pervasive Infrastructure
Digitization enables the sharing of vast datasets through standards-compliant web services and dynamic
cloud computing. Web services facilitate machine-to-machine communication, allowing access to various
types of data through specific queries. Cloud computing provides an extensible framework for storing and
accessing massive datasets from any computer connected to the network.
This implies that entire research datasets can be shared online, enabling others to test hypotheses and
modify the original data. However, ethical concerns regarding data validity, privacy, and misappropriation
exist. Polymorphic browsing allows users to access and analyze heterogeneous data streams, integrating
data from disparate sources in customizable ways. For literature scholars, this means having access to every
word in every edition of every book ever published, customizing searches to address specific research
questions.
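Integrating heterogeneous data streams, as polymorphic browsing requires, can be sketched as normalizing records from differently-shaped sources into one queryable list. Both sources and their field names are invented for illustration:

```python
# Two invented sources with different schemas for the "same" kind of record.
library_api = [{"title": "Moby-Dick", "year": "1851", "fmt": "print"}]
web_archive = [{"name": "Moby-Dick", "published": 1851, "medium": "e-text"}]

def normalize(record):
    """Map each source's fields onto one shared schema."""
    return {
        "title": record.get("title") or record.get("name"),
        "year": int(record.get("year") or record.get("published")),
        "medium": record.get("fmt") or record.get("medium"),
    }

merged = [normalize(r) for r in library_api + web_archive]
# A customized "search": every edition of a given title, whatever the source.
editions = [r for r in merged if r["title"] == "Moby-Dick"]
print(editions)
```

Real systems do this through standards-compliant web services rather than in-memory lists, but the design problem is the same: disparate schemas have to be reconciled before they can be browsed together.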
Ubiquitous Scholarship
The new emerging forms and methodologies in the humanities reveal that the ways in which knowledge
manifests itself can no longer be considered given. The tools of humanistic inquiry are now objects of
research and experimentation as much as the modes of production and dissemination of knowledge.
Statistics and graphic design challenge the boundaries of the qualitative human sciences and
present new opportunities for research and experimentation.
In this scenario, smartphones and other location-based mobile devices play a key role in decentralizing
humanistic practice. Their ubiquity allows for connecting web-based knowledge resources to physical
locations, enabling narration, annotation, and enrichment of the physical landscape with web-based
archival sources. This has made it possible to share entire sets of research data online, allowing others to
test hypotheses and modify the original data.
The advent of augmented reality and ubiquitous computing has further extended the possibilities, allowing
for a layered and site-specific presentation of events and interpretations. This transformation has opened
new horizons for participation, dissent, and freedom in the production and sharing of knowledge and
culture.
The final result is a hypnotic "walk" through the history of Italian art, designed specifically for MEET's immersive room. Renaissance Dreams remains on view at Milan's international center for digital culture, by reservation only, on the selected day and time.
The revolution of the intangible: from 7 to 27 September 2022 at MEET, the first immersive exhibition dedicated to the NFT phenomenon and the aesthetics of the metaverse.
Mapping NFTs - Mauro Martino, artist and scientist, expert in artificial intelligence and data visualization, leads us into the heart of the fascinating universe of NFTs: the non-fungible tokens which, introduced as smart contracts for digital ownership, are rapidly transforming the worlds of art, architecture, and fashion.
Through a full-screen immersive video installation created with advanced deep-learning and visual-design techniques, Martino recounts five years of evolution in NFT aesthetics and the NFT market, allowing viewers to experience the qualitative dimension of an intangible revolution destined to influence the playful and productive fabric of human communities.
Built on an impressive quantity of data gathered over more than three years of research and scientific publications (5 million NFTs, tens of millions of market transactions analyzed, hundreds of blockchains examined), Mapping the NFT revolution is not only the most extensive exploration of the NFT phenomenon conducted to date, but also an artistic reflection on its potential and, above all, on the space-time continuum that makes it possible (the metaverse): a moving data-film that builds a bridge between the physical and the digital, inscribing the technology of the future within a fluid creative project suspended between reality and imagination.
Immersing oneself in the exhibition (completed by a second room offering a full overview of Martino's work as an A.I. artist) is thus both a poetic and a social experiment: an opportunity not only to be seduced by the virtual utopia, but to understand it and take part in it consciously, so as to build and responsibly inhabit new immaterial, parallel spaces.
From 14 April to 8 May 2022, MEET presents The Lift, the site-specific immersive work by Fabio Giampietro (Milan, 1974), produced in collaboration with Valuart. Active on the art scene for more than 20 years, Fabio Giampietro is among the pioneers of the NFT revolution and one of the leading figures of the Italian and international digital art scene. Giampietro's artistic work explores the creative possibilities that emerge at the intersection of art and technology, a theme that is a cornerstone of MEET's programming.
The Lift is a highly technological project in which Giampietro involves the audience, making them accomplices in the story. The experience begins inside an art gallery, during the opening of a solo show by the artist himself; on the walls, seven large canvases reproducing Fabio Giampietro's landscapes surround the viewer.
Visitors to The Lift are accompanied along the exhibition path by the voice of a mysterious guide, who invites them to gather at the center of the room to take in Giampietro's work in a single broader gaze. Suddenly, the gallery walls fall away like cards, revealing a single circular panorama drawn precisely from the paintings, which collapse together with the walls. Here the journey begins: the surrounding panorama starts to move, transforming the room into an enormous glass elevator rising in the middle of a metropolis, accelerating until it catapults the viewer out of the skyscraper itself, beyond the city, floating in space surrounded by geometric shapes, strange creatures, and the characters that inhabit social networks, the world of video games, and the artist's own imagination.
The journey is not over: after a moment of pause the elevator plunges rapidly downward, accompanied by a great din; the cloud of dust raised by the crash clears, revealing an unexpected conclusion.
Fabio Giampietro was born in 1974 in Milan, where he lives and works. Through his characteristic, unique technique (painting by subtracting oil color from the canvas) he expresses a strong, intense figurative style.
Considered one of the precursors of immersive art, with his VR installations Fabio Giampietro has transcended the "tangible world," definitively connecting it to the "metaverse": a visionary who never ceases to amaze.
After numerous exhibitions in Milan (including a solo show at Palazzo Reale), Bologna, Venice, Shanghai, Miami, Los Angeles, San Francisco, Berlin, and Toronto, Fabio Giampietro has carved out a leading role on the Italian and international art scene.
Michel de Certeau (1984) makes an important distinction between space and place, saying that space is a "practiced place": the people who walk in a place transform it into a space.
Henri Lefebvre (1991) speaks of "representational space": a lived, social space that is the distinct product of every society.
Museums pass from "place-based cultural institutions" to "dispersed (post)modern spaces." This is because technology has changed every aspect of how we get information and how we do everything else. We are in the digital age.
The post-postmodern age of museums today represents not so much a step forward as a step backward, embracing more traditional elements, still within the context of the digital age, that assimilate well with populist tendencies from the late twentieth century. It is better not to be rigid in defining museums, because the fluidity typical of our times can help us.
To understand a museum in the digital age is to understand how its online (global) community relates to its physical (local) community, and to all the points and flows of interaction within its distributed network. The space of the museum is determined top-down by cultural elites and bottom-up by communities, and it somehow reflects both interests.
The study of culture is thus the study of the machinery that individuals and groups of individuals employ to orient themselves in an otherwise opaque world (Clifford Geertz, 1973). Slack and Wise then change the perspective on the "problem": it is cultural determinism, not technological determinism. As culture develops, it also develops new technologies to accomplish its goals. Another point of view holds that culture is a process rather than an absolute, a process of ordering, not of disruption. Rosalind Williams encourages us to view technology as an environment rather than as a tool.
We have five case studies: five museums examined in relation to their "digital" approach:
1. The Indianapolis Museum of Art (IMA), located at the "crossroads of America." The city is best known for its passion for sports, but the museum represented the USA at the 2011 Venice Biennale. Its audience is not technologically comfortable, but by using online platforms and sites the museum reaches a larger audience; this is how it "brings in" visitors.
2. The Walker Art Center is situated in Minneapolis, in the same region as Indianapolis but with big differences. The city has a long theatrical tradition, and the museum concentrates more on contemporary art. Herzog & de Meuron built its new wing. It runs programs for teens provided by a group of artists and educators.
3. The San Francisco Museum of Modern Art (SFMOMA) also has a long cultural tradition, with more focus on local historical preservation. The city is near Silicon Valley. These two aspects come together in a museum that is more technologically advanced than the others, including on its online platforms.
4. The Museum of Modern Art in New York (MoMA) stands in the most highly educated area of the five museums and sits in the center of New York, so tourists are most likely to go there. It has 2.8 million visitors a year, 60% of them foreigners. The site is used to better serve this international audience.
5. The Brooklyn Museum opened one year before Brooklyn became part of New York, and it now has the biggest community of all five museums, with 2.5 million people. Its use of technology can be characterized by a focus on its visitors and a willingness to take risks.
Even if we are tempted to think that space no longer matters, it still does. Bruno Latour writes about what he calls the rematerialization of digital techniques: "the expansion of digitality has enormously increased the material dimension of networks: the more digital, the less virtual and the more material a given activity becomes."
We might assume that every piece of digital art produced is "unbreakable," but that is not true. Obsolescence is a big problem: hardware and software become obsolete far faster than other art supports. To preserve them we must practice proactive care, such as digitization and format migration. Digital preservation, too, has its problems. The natural process through which the TMB passes is exhibiting, then documenting or archiving.
"Medium independent behaviors”: Variable Media Network (1999-2004)? → Variable Media Questionnaire
- Storage
- Migration
- Emulation
- Re-interpretation
Case study I – Il nuotatore (va troppo spesso ad Heidelberg) – Studio Azzurro (1984)
The artwork was created by Studio Azzurro; the first installation was made in 1984, with these
characteristics:
Later the installation was renewed, and the characteristics changed:
It was restored by the IMAI Foundation in 2006. The job had different goals: the first was the migration to digital Betacam tapes from the U-Matic tapes and laserdiscs; the second was the creation of a restored digital Betacam tape from the previous U-Matic tapes and laserdiscs; the last step was the exhibition of the copies in MPEG-2 format.
Case study II – My Boyfriend Came Back from the War – Olia Lialina (1996)
1. Phase One: Systematic mapping of the documents, subdividing them by documentary type, i.e.
correspondence, press reviews, artist's exhibition catalogues (dividing into “categories” such as
places, timeline, artworks, people, document types, and themes)
2. Phase Two: Outline the methodologies and technological applications best suited to this specific
context (VR, AR, and AI) (an example could be the KET application on Monument to commemorate
Auschwitz’s victims)
3. Phase Three: Digital implementation program (website, social media pages), digitization of
archival material, implementation of digital platforms, a VR exhibition, workshops, and
automatic mapping via AI of the artist's works in auction houses and art galleries
The chosen approach was interdisciplinary, spanning the fields of conservation, valorization, inclusion, and accessibility.
NEFFIE Project
The NEFFIE project comes from two different approaches: the first is technological, in particular the technological innovation of Alberto Sama; the other perspective is the art-historical theoretical thinking supported by Francesca Pola. The project is divided into different steps:
Vaccari says about his work: "From a structural point of view, the novelty of the exhibition in real time is the feedback. The process I triggered works as if it were an organism expanding and interacting with every aspect of the environment in which it is located. Once the process was triggered, the technological unconscious of the machine, stimulated by the environment, produced a new reality which fulfilled its own necessity; its ability to avoid manipulation and to resist being used for personal reasons guaranteed that something new and unexpected would be seen."
NEFFIE reconfirms the importance and potential of the photographic medium as a tool of awareness and
cultural emancipation and highlights how the technological unconscious can today be defined as
neuroaesthetic unconscious.
The NEFFIE METAVERSE is the new virtual environment designed in collaboration with ETT S.p.A. that
enables immersive enjoyment of the NEFFIE meta pictures, with the ability to interact in real time with
other visitors through their avatars (max 50).
NFTs
Non-Fungible Tokens (NFTs) are distinct pieces of digital media that are verifiably scarce and unique. Powered by blockchain technology, NFTs are already reshaping the landscape of digital art.
To understand this better: a fungible asset is any asset that retains equal extrinsic value to an asset of like kind; non-fungible assets are those that cannot be exchanged for other items (they are unique).
NFTs allow the trade of unique items within the digital space by establishing ownership (collectors) and paternity (authors). But how does the blockchain work?
1. You send a number of bitcoins from one address to another: a transaction is requested in the
network.
2. The transaction request is then sent to a peer-to-peer network consisting of nodes.
3. The network of nodes validates the transaction using cryptographic algorithms; this process
is called mining.
4. The validation process and the addition of the new block are done by miners, who are
rewarded for their participation in securing the network with newly minted units of the
cryptocurrency.
5. The new block, which now contains your transaction, is then added to the end of the existing
blockchain.
6. Your transaction is now complete, and the block that includes it is integrated with the
Bitcoin blockchain.
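The append-only chain behind these steps can be illustrated with a toy hash chain. This is a deliberate simplification: real networks add proof-of-work, digital signatures, and peer consensus, none of which are modeled here:

```python
import hashlib
import json

def make_block(transactions, prev_hash):
    """A block commits to its contents and to the hash of the previous block."""
    body = json.dumps({"tx": transactions, "prev": prev_hash}, sort_keys=True)
    return {"tx": transactions, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

genesis = make_block(["coinbase -> alice: 50"], prev_hash="0" * 64)
block1 = make_block(["alice -> bob: 10"], prev_hash=genesis["hash"])

def valid_link(parent, child):
    """Recompute the parent's hash; tampering with history breaks the link."""
    body = json.dumps({"tx": parent["tx"], "prev": parent["prev"]}, sort_keys=True)
    return child["prev"] == hashlib.sha256(body.encode()).hexdigest()

print(valid_link(genesis, block1))  # True for the untampered chain
```

Because each block stores the previous block's hash, altering any earlier transaction changes that hash and invalidates every block after it, which is why the record can only be expanded, never modified.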
Glossary from Palazzo Strozzi Site
Digital art: The term digital art refers to works of art created or presented using digital technologies. The first examples date back to the 1960s, but the practice has evolved in parallel with the development of technology and the creation of software dedicated to the creation and enjoyment of works of art. Over the decades digital graphics in particular have developed massively and at great speed, going from 2D to 3D and on to augmented and virtual reality, in a context of ever greater interaction with the audience. In recent years digital art has acquired new artistic and market value thanks to the rise of the Cryptoart movement.
Cryptoart: Cryptoart (also known as cryptographic art) is an art movement devoted to the creation of
digital works of art linked to blockchain technology which makes it possible to own, transfer and sell a work
of art in a cryptographically secure and verifiable manner. In 2017 the first blockchain systems also became
accessible to a whole new generation of artists close to the tech world, who instantly intuited their
potential, experimenting with the creation of works of art based on digital illustration and programming.
Blockchain: A blockchain is a digital data record documenting changes of ownership of digital items among
different players. This record has the specific characteristic of being shared and unchangeable. The data is
grouped into "blocks" connected in chronological order and distributed in a decentralized archive. A
specific feature is that the record cannot be modified, only expanded. Once created, blocks can no longer be altered
or eliminated but they can be added to each time a transaction takes place. This technology was first
introduced in 2008 as an IT structure underpinning the Bitcoin cryptocurrency by an anonymous inventor
(or group of inventors) going by the pseudonym of Satoshi Nakamoto. The most commonly used
blockchains include Ethereum, still one of the most widespread in the art world. The exponential growth of
interest in this new technology has triggered strong criticism as to its sustainability, given its high energy
consumption. New, greener chains have recently seen the light of day, including Polygon, Algorand and
Tezos, whose energy footprint is over 90% smaller than that of Ethereum or Bitcoin.
NFT: An NFT (an acronym for Non-Fungible Token) is a digital certificate testifying to the characteristics,
originality and single ownership of a physical or digital asset, registered in unalterable cryptographic records
based on blockchain technology. An item certified with an NFT is original and non-interchangeable: it is not
a duplicate or a reproduction. Thus an NFT is a tool for tracing the ownership of digital files. Whoever buys
one is not buying the copyright to the item but a certificate allowing him or her to prove its ownership. The
first artwork ever recorded on a blockchain was Quantum, created in 2014 by the artist Kevin McCoy who
chose to record his work as an NFT in order to make the file unique, traceable and exchangeable.
Wallet: To create, keep, buy or sell NFTs and cryptocurrencies (such as Bitcoin or Ether) you need a digital wallet. This is an application used to store and manage cryptocurrencies and NFTs. The wallet must be compatible with the blockchain on which the NFT has been created. The ERC-721 token on the Ethereum blockchain is the standard one, widely used by the creators of digital artworks in single editions.
Minting: The act of publishing an NFT certificate on a blockchain is known as minting. Once minted, an NFT
acquires specific characteristics such as uniqueness (in the sense of a single edition) or as being part of an
edition based on the type of standard chosen for its creation.
Smart contract: Smart contracts are IT protocols that facilitate, verify or ensure compliance with the
execution of a contract. Smart contracts contain contractual clauses that follow the “if/then” function: i.e. if
a given condition predetermined by the parties to the contract occurs, then an agreed action will take
place. To ensure that the contract is not altered, it is recorded on blockchain technology, thus guaranteeing
its integrity and security.
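The "if/then" clause structure can be sketched as a toy escrow contract. Real smart contracts (e.g. on Ethereum) run as bytecode on-chain, so this is only an off-chain illustration, with invented parties and amounts:

```python
class EscrowContract:
    """If the buyer's deposit meets the agreed price, then transfer ownership."""

    def __init__(self, seller, item, price):
        self.seller, self.item, self.price = seller, item, price
        self.owner = seller

    def deposit(self, buyer, amount):
        # The "if" clause: a condition predetermined by the parties.
        if amount >= self.price:
            # The "then" clause: the agreed action executes automatically.
            self.owner = buyer
            return f"{self.item} transferred to {buyer}"
        return "condition not met: deposit refunded"

contract = EscrowContract(seller="alice", item="artwork-nft", price=100)
print(contract.deposit("bob", 100))  # artwork-nft transferred to bob
```

Recording such a contract on a blockchain is what removes the need to trust either party to honor the "then" clause: the condition check and the action are executed by the network itself.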
Artificial Intelligence: Artificial Intelligence (AI) is the IT discipline that studies the development of
hardware and software with specific capabilities typical of the human being (interaction with one’s
surroundings, learning and adaptation, reasoning and planning). AI is based on algorithms: groups of instructions applied in order to perform an action or to solve a problem. AI systems are capable of performing tasks independently and of making decisions typical of human beings, and thus potentially also of taking their place. In the art field this translates into the creative possibility of a dialogue between a human and a machine, in which an artificial assistant can become the co-author of a work of art.
Metaverse: The metaverse is an interactive virtual space shared over the Internet and accessible to users
via an avatar. Avatars can move freely in this three-dimensional space, interacting with each other and
engaging in activities as in the real world. Physical space and the metaverse can link up through the use of
a simple mobile phone or computer, or through more immersive experiences using devices such as virtual
reality visors. The word metaverse was first used in a sci-fi novel entitled Snow Crash (1992), in which
author Neal Stephenson envisaged the birth of a virtual immersive world populated by avatars. The Second
Life (2003) and Minecraft (2009) platforms may be considered the first examples of the metaverse.
Metaverses have been created in recent years based on blockchain technology, for example Decentraland
and The Sandbox. Mark Zuckerberg has recently rebranded his company Facebook as Meta Inc., intuiting
the potential development of the phenomenon in social media.
Visual Art Making and the Digital
We have to underline some differences in the terminology that we are going to use:
- Digital Art: art that uses digital technology, such as computers, to produce creative forms,
whether written, visual, aural, or increasingly the case, in multimedia hybrid form
- Net Art: art that uses the Internet as its medium and cannot be experienced in any other way
Paik is one of the most important art figures of the twentieth century. He was Korean but left the country because of the war, establishing himself in China and Japan and then in Germany; his biggest community was in New York. He was one of the first artists to take technology seriously.
The Smithsonian Museum has collected some of his best work, such as Megatron/Matrix, made in 1995, the work before the Electronic Superhighway. Part of the artwork hangs on the wall nearby: the Hawaii and Alaska section. It comprises 336 television sets, 50 DVD players (now media players), 114,300 meters of cable, and 17,526 meters of neon tubing.
The associations between the images and the states they stand for are assumptions based on the artist's life and friends. For the state of New York there is a small spot with a real-time camera and a little monitor, so you can "shoot" yourself and become part of the installation in real time. The soundtracks of each state are also important.
One of the biggest ideas behind the work is the OPEN ROAD. That idea should make us ask whether the road is equally safe for everyone.
We're actually in a very interesting period right now, because video was first used by artists in the 1960s, well before video became a household word in the country. Most people became familiar with it through some early technology developed by companies like Sony, which shrank the very large, almost refrigerator-sized equipment found in the television studios of the era.
Artists began to discover this medium: black and white, rather crude by today's standards, but nonetheless with incredibly interesting possibilities in visual terms, and they began using it. We're really at a period now where I think we can safely say that the early days of video are over. I think we're seeing a second and third generation of artists who grew up with video, who are familiar with it, and who are using it very effectively.
I think museums like the Whitney have pioneered the use of video: the Whitney had a video gallery in the early 1970s, a space devoted just to video art within the museum. We're at a transition period (1995): museums that have not really shown much video are now looking at it as a very important art form and medium.
That's relevant today because we live in an age of instant communication: artists are doing the same thing when they use televisions and technologies in other ways, differing only in the main aim of the object.
Other important figures of that period are Gary Hill, Mary Lucier, Dara Birnbaum, and Marcel Odenbach. These new technologies are important if you think that they can bring us a message from the President of the United States that is going to change world history, and a moving pattern of abstract electronic imagery that is going to give you a purely visual aesthetic experience; within that range there is a tremendous span of opportunities and possibilities.
I DON'T THINK IT'S BEEN SINCE REALLY THE RENAISSANCE WHERE ARTISTS HAVE BEEN ABLE TO USE A
MEDIUM THAT ONE COULD SAY IS THE DOMINANT COMMUNICATION FORM OF THE SOCIETY.
I make my art with the same medium that people use in their everyday lives. One of the beautiful things is also that I don't have to "create" something from scratch: I can use something that already exists to create something new.
The Greeting is a piece, exhibited in Venice at the Biennale, that was conceived as a single image on the wall of a room. In other installation works I involve the whole space with multiple images; this is a single image, loosely based on a painting by Pontormo, a 16th-century Italian artist.
The interesting thing about it is that it is a very common social situation between three women: two women are talking, a third enters, and they meet and greet each other. In my artwork it was shot at 300 frames a second, actually on film, so very simple, normal events that occur in flashes become extended, as a kind of choreography or ballet in time. Sound is the basis of images for me. Video art will create some problems in terms of archival and preservation issues.
BILL VIOLA - Martyrs (Earth, Air, Fire, Water) - St. Paul’s Cathedral – 2014
Four plasma screens are placed side by side in vertical format; they are narrow and long, not wide, and represent four examples of martyrdom.
The notion of martyrdom is all about accepting your fate, sacrificing yourself for some higher good or for some ideal that you hold inside; we meet these four people on the screens at a moment when they have already accepted it.
You have John, the water person, left for dead, who is then literally hoisted upside down with his arms out, the position of St. Peter's crucifixion. He kind of wakes up again, sees where he is, and gathers his strength; then water comes down on him, pours over him, buffets him, and moves him around, until finally he finds the strength, breaks through, and ascends.
The second one is wind, performed by Sarah, who battles the wind, her antagonist. The wind gets stronger and stronger, there is a tussle, and then she breaks through: she sees where she is, she knows she can do this, she knows that she can continue.
Then the earth itself: an image of a person underground, covered in dirt, trying to find a way out. That element is actually run backwards, so instead of the earth falling down and crushing this man, you see a pile of earth, then a hand and a leg, and you realize it is part of a person; they gradually come to the surface, and out of that comes the realization that all those parts come together, and suddenly things begin rising up, always running backwards.
The fire piece is really intense. We used multiple takes from different angles so that we could bring together multiple things in the same frame. He wakes up and realizes he is in a really bad situation, with fire around him ready to burn him alive, so he has to come out of his cocoon and begin to fight back. He miraculously finds the strength, breaks through, and then you begin to see light at the end of the tunnel; things start calming down, and all of a sudden he can look upward and outward and feel that he has finally broken through.
The people are going through this internally, not externally. The four elements are there because death is a force of nature; they represent the rage of these forces, the most intense passion of going through death and transcendence into light.
Digital Art
Digital Art could be defined as follows: art that explores digital technologies as a medium by making use of the medium's key features, such as its real-time, interactive, participatory, generative, and variable characteristics, or by reflecting upon the nature and impact of digital technologies.
The evolution and definition of digital art are strictly connected with the exploration of digital technologies as a medium rather than just a production tool. The development of digital art runs from the algorithmic drawings of the 1960s to the digital-born art of the 1980s-2000s, characterized by real-time, interactive, and performative features.
The emergence of "post-digital" and "post-internet" art is explored, describing works shaped by digital
processes yet manifesting as traditional art objects. The exhibition "Programmed: Rules, Codes, and
Choreographies in Art, 1965–2018" delves into the historical connections between conceptual, video, and
contemporary digital art. It examines different understandings of a program, connecting digital art to rule-
based conceptual practices and focusing on coding, image sequences, and resolution. The exhibited works,
whether analog or digital, are influenced by the history of science, technology, and art-historical
movements.
The post-World War II era witnessed significant developments in digital media, shaping both theoretical
and technological landscapes. Vannevar Bush's 1945 essay, "As We May Think," introduced the concept of
"memex," a device influencing the evolution of digital computing. The 1960s played a pivotal role in
technological advancements, with Theodor Nelson coining terms like "hypertext" and "hypermedia," laying
the groundwork for today's internet. The emergence of digital computers like ENIAC and UNIVAC in the late
1940s and early 1950s marked a transformative period. Cybernetics, introduced by Norbert Wiener,
explored communication and control systems, influencing artists to experiment with cybernetic art in the
form of responsive sculptures.
The 1960s saw the birth of key concepts in computer technology, including graphical user interfaces and
information space. Douglas Engelbart's ideas of bitmapping and direct manipulation through a mouse,
along with advancements at Xerox PARC, led to the creation of graphical user interfaces. Artists, such as
those associated with Experiments in Art and Technology (EAT), explored collaborations between engineers
and artists, resulting in groundbreaking performances and installations. International exhibitions like Nove tendencije and Cybernetic Serendipity showcased early computer art, while artists experimented with
satellite telecasts and interactive performances, anticipating today's online interactions.
Throughout the 1970s and 1980s, artists from various disciplines embraced computer-imaging techniques,
laying the foundation for diverse digital art practices in the following decades.
The advent of the World Wide Web in the mid-1990s gave rise to NET ART, evolving in the 2000s with
critical engagements on platforms like Facebook, YouTube, Twitter, and Instagram. As digital technologies
became pervasive in the "internet of things," artists began exploring post-digital practices, creating works
deeply influenced by digital technologies.
The works featured in the exhibition "Programmed" are situated within the broader context of
technological, scientific, and art-historical developments. The "Rule, Instruction, Algorithm" section
explores a lineage connecting early rule- and instruction-based art forms to algorithmic art and to practices establishing open technological systems. Josef Albers' screen prints, influenced by Bauhaus color theory, are juxtaposed with software-based works, highlighting the conceptual connections between Bauhaus experiments and modern software-driven art.
The historical lineage of instruction and rule-based practice is traced through movements like Dada, Fluxus,
and conceptual art. Sol LeWitt's conceptual art, emphasizing the idea as a driving force, is contrasted with
Casey Reas's {Software} Structures, explicitly referencing LeWitt's work. LeWitt's Wall Drawing #289 is
displayed alongside Reas's software, illustrating the intersection of conceptual and software art.
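The link the text draws between LeWitt's written instructions and Reas's software can be made concrete with a toy sketch. The instruction below is invented for illustration (it is not an actual LeWitt wall drawing text), and the function name is my own:

```python
import math
import random

def wall_drawing(n_lines, length=100.0, seed=None):
    """Interpret an instruction as code, LeWitt-style: 'draw n straight
    lines of equal length from the center, at random angles'.
    (A hypothetical instruction for illustration only.)"""
    rng = random.Random(seed)  # seeding makes each "execution" repeatable
    lines = []
    for _ in range(n_lines):
        angle = rng.uniform(0, 2 * math.pi)
        end = (length * math.cos(angle), length * math.sin(angle))
        lines.append(((0.0, 0.0), end))
    return lines

# The "idea" (the instruction) drives the form; the executor -
# human draftsperson or computer - merely carries it out.
print(len(wall_drawing(50, seed=7)))  # 50 line segments
```

As with LeWitt's wall drawings, every execution of the same instruction is a valid realization of the work, even though the concrete marks differ from run to run.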
The 1960s witnessed the emergence of early algorithmic artists, known as Algorists, who wrote code stored on punch cards to create "digital drawings." Artists like Harold Cohen, Herbert Franke, and Chuck Csuri
pioneered this form. The Fluxus group's events and happenings, emphasizing audience participation and
precise instructions, foreshadowed today's interactive computer artworks.
Programmed delves into the use of instructions and language as materials, exemplified by Lawrence
Weiner and Joseph Kosuth's works using language as art. W. Bradford Paley's CodeProfiles and the
collaboration between Lucinda Childs, Sol LeWitt, and Philip Glass showcase the integration of language,
instruction, and form in digital and conceptual art.
A group of works by Ian Cheng, Alex Dodge, and Cheyney Thompson explores the generative qualities of
rule-based systems.
GENERATIVE ART, defined by Philip Galanter, involves artists setting systems into motion, contributing to,
or resulting in completed artworks. The exhibition connects generative art practices to ancient traditions,
such as Islamic patterns and the Jacquard loom, highlighting their role in the history of computing. The generative patterns in the artworks are set in motion by systems ranging from artificial-intelligence conversations to the Drunken Walk algorithm.
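The "Drunken Walk" mentioned above is essentially a random walk: the artist defines the rules and the randomness, sets the system in motion, and the resulting path is the work's form. A minimal sketch along those lines (the function name is my own, not from the exhibition):

```python
import random

def drunken_walk(steps, seed=None):
    """Generate a 'drunken walk': at each step the walker moves one
    unit in a random cardinal direction, tracing a generative path."""
    rng = random.Random(seed)  # a seed makes the walk reproducible
    x, y = 0, 0
    path = [(x, y)]
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(steps):
        dx, dy = rng.choice(moves)
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

# The artist authors the system, not the final image: plotting this
# path (e.g. as a polyline) yields a different drawing per seed.
path = drunken_walk(1000, seed=42)
print(len(path))  # 1001 points: the origin plus one per step
```

This captures Galanter's definition in miniature: the system, once set in motion, contributes to or produces the completed work.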
This section (Signal, Sequence, Resolution) of the exhibition explores a historical trajectory distinct from
conceptual practices, focusing on light, the moving image, and optical environments. Early kinetic art and its
evolution into new digital forms of cinema and interactive television are examined within the context of
scholars like Oliver Grau and Erkki Huhtamo. Pre-digital works from the 1960s, such as Nam June Paik's
Magnet TV and Earl Reiback's Thrust, play with electronic signals, illustrating the concept of the electronic
signal as a carrier of instructions and visual information.
The evolution of digital cinema and the moving image, influenced by animation and pre-cinematic devices,
is highlighted. Lillian Schwartz's groundbreaking films, John Whitney Sr., and Chuck Csuri's contributions in
the 1960s, and Gene Youngblood's concept of "expanded cinema" are explored. Nam June Paik's Fin de
Siècle II and Steina's Mynd demonstrate the creation of visual spaces, while Jim Campbell's Tilted Plane
explores resolution and pixelation.
Digital technologies in conventional cinema challenge traditional notions of realism and representation.
The digital medium redefines cinema's identity, offering instant copying, remixing, and blending disparate
visual elements into simulated reality. The potential for interaction in digital film challenges narrative
conventions. Lynn Hershman Leeson's Lorna, the first interactive LaserDisc artwork, exemplifies this,
offering a branching narrative navigated by viewers via a remote control.
Artists in Programmed critically engage with the social, cultural, and political impacts of digital
technologies. Keith and Mendi Obadike's The Interaction of Coloreds satirizes online commerce's biases,
while Jonah Brucker-Cohen and Katherine Moriwaki's America’s Got No Talent visualizes Twitter feeds
surrounding TV talent shows, examining the impact of social media on opinion and sentiment. These
artworks point to the profound changes in data processing, communication frameworks, and societal
fabric brought about by digital technologies. Digital technologies and interactive media have transformed
artistic practices, challenging traditional notions and engaging audiences in new ways.
THE ROLE OF THE ARTIST HAS SHIFTED FROM SOLE CREATOR TO MEDIATOR OR FACILITATOR, OFTEN
COLLABORATING WITH PROGRAMMERS, ENGINEERS, SCIENTISTS, AND DESIGNERS.
Digital art defies easy categorization, collapsing boundaries between art, science, technology, and design.
Programmed, by presenting examples of artistic engagement with rules and codes over the past fifty years,
illuminates the rich and complex histories of art, science, and technology, shaping and nurturing each
other's evolution and impacting how societies and cultures are constructed and perceived.
ARTPORT is the Whitney Museum's portal to Internet art and an online gallery space for commissions of net
art and new media art. Originally launched in 2001, artport provides access to original art works
commissioned specifically for artport by the Whitney, documentation of net art and new media art
exhibitions at the Whitney, and new media art in the Museum's collection.
The transition from analogue to digital media is particularly challenging because some artworks depend
aesthetically and/or conceptually on analogue technologies. A change from analogue to digital may have
major consequences for the appearance, the functioning, and the historical value of artworks.
Digital works also influence the way they are displayed; this requires curatorial strategies and communication strategies (audience engagement). Museums and art institutions in general are adapting to a changing society by developing new meanings of crucial notions such as place, community, and culture.
Art museums address crucial issues in the digital age, such as authenticity, contemplation, discourse, expertise, creativity, and authority; the challenge they are facing concerns the transformation of their spaces from early place-based cultural institutions to more dispersed (post)modern spaces.
The NEW MUSEOLOGY that began in the late twentieth century turned the focus of museums from
collections care to public programming, to integrate the new participatory culture (shift from passive
audience to active community). The digital age has spawned an open and participatory culture that is
accustomed to immediacy, visual enticement, lower entry barriers, and an abundance of publicly available
information.
Digital in Visual Art Management, Communication, Engagement
Digital media are public-facing technologies that connect the organization with the public/visitors. For art organizations this means having a digital plan and strategy, an infrastructure that supports it, content to be posted, and traffic, a flow of interaction, all in order to provide a better service.
Digital curation and digital content are fundamental for arriving at digital storytelling.
In order to provide the best service, the choice of platforms is important:
- Platforms and Tools: Basic (website, social media…) and Experimental (Virtual Reality, 360, games,
smart objects)
- Service Design (centered on visitor experience)
- Design Thinking and Agile Model (collaborative vs departmental thinking: collaborative, user-
centered practices to move projects from the unknown to the known through short sprint cycles)
Digital departments became centralized teams responsible for developing and delivering digital practices
and functions within a sector and workplace culture that was overwhelmingly not digital. Whether it was
formally acknowledged or not, from the outset, these departments were agents of change; they were
leading the digital transformation of their non-digital institutions.
This was true at The Met, where the Digital Media Department, as it was initially named, was established in
November 2009. The new department brought together existing staff for whom digital was deemed the
distinguishing factor in their role, and who, up until then, had been part of Education, Web Group,
Information Systems and Technology, and Collections Management.
In many ways, this approach worked—both at The Met and at other cultural institutions. Websites were
built, collections were digitized, apps were launched, digital content was produced, social media accounts
multiplied, and Webby Awards were won. But today, as new technologies, opportunities, and challenges
emerge at an accelerating pace, and in an age where digital is critical to how cultural institutions build
relevance and participate in society, the strategy of centralizing responsibility for digital transformation
within a single, distinct department is reaching its limits. Now a broader institutional approach is needed.
The opposite may even be true: The highly centralized Digital Department at The Met created a tension
where digital was often perceived as the role of that "well-resourced" department, and not a shared
responsibility.
Each team in the Digital Department has a dual mandate:
- Collaborate with stakeholders across the Museum to identify and advocate for changes that would better enable the Museum to harness the opportunities of digital;
- Deliver activities, projects, and programming specific to their team's functions.
While the three teams in The Met's Digital Department are stakeholders in one another's work, they are not
the only stakeholders; there are other departments across the Museum that are equally invested, whose
success is tied to the successful use of digital technologies. This is apparent at The Met, where digital-related activities are already more widespread than just the Digital Department:
- Imaging Department, and the development of a new program for advanced imagery of The Met
collection, including three-dimensional scanning and photogrammetry
- Merchandising and Retail, and their re-think of The Met Store online, its integration with the
institutional website, and shift towards a product-management mentality
- Department of Greek and Roman Art, and their dedication to digitizing and cataloguing the 13,000-
plus fragments of the Dietrich von Bothmer Collection
- Department of European Paintings, and their commitment to developing content-rich records for
the objects in the online collection
- Education Department, and the livestreaming, recording, and online presentation of their public-
facing events
- Marketing and External Relations, and the integration of digital channels—social media and email
—with traditional media channels to promote The Met
Each of these departments is involved in transforming their existing practices in response to the
opportunities presented by digital technologies, and each is completing work that is better housed within
their functional teams than within a centralized Digital Department.
Digital departments have therefore succeeded by providing a psychological umbrella (and sometimes a
fortress) under which teams can successfully steer their institutions through "digital discomfort." Under this
umbrella (or within that fortress), digital departments have incubated a team culture that enabled digital
transformation, where agile ways of working, failing forward, minimum viable products, iteration, launching
early and often—all characteristics of a successful digital operation—could be expressed. Digital
departments have created an undocumented subculture within the existing culture of their institution.
The incubation of this subculture is probably the most significant contribution digital departments have
made to their institutions; it needs also to be their most enduring. If the underlying success of a digital
department is delivering transformation, then as transformation is formally expanded across an institution,
that institution would do well to recognize the benefits of that subculture and proactively marry it—or a
derivative therein—to the cultural values of the institution. It is then incumbent upon institutional
leadership to support and protect that marriage of cultures, establish the success indicators for digital
transformation at an institutional level, and ensure the organization is focused on achieving that
transformation collaboratively.
TATE
Difference Machines
Some governments and corporations are creating databases that surveil and monetize our identities, as well
as biased algorithms that perpetuate discrimination in everything from education and healthcare to
employment and the justice system.
The forerunner of the computer was called a “difference engine,” as it was used to calculate the differences
between numbers. Today, we are surrounded by “difference machines,” or computers that are used to
encode the differences between us.
Difference Machines presents the work of 17 contemporary artists who address the complex relationship between the technologies we use and the identities we inhabit. Collectively, they ask some of the most urgent questions we face today.
When the web first emerged in the 1990s, many people imagined that it would allow us to escape our bodies; some celebrated the idea that attributes like our race, ethnicity, gender, sexual orientation, and disability would become irrelevant. Instead, the opposite became true: our offline and online selves have fused.
The artists in this exhibition explore our newly digitized identities by creatively adapting both familiar and
emerging technologies.
Many examine how digital systems contribute to the exclusion, erasure, and exploitation of marginalized people; others emphasize how digital tools can be repurposed to tell more inclusive stories or to imagine new ways of being.
The Albright-Knox Art Gallery’s exhibition "Difference Machines: Technology and Identity in
Contemporary Art" explores the intricate relationship between technology and identity. The entrance
features Zach Blas's photographs showcasing people wearing futuristic, facial-recognition-evading masks.
The exhibition encompasses seventeen artists and collectives, spanning three decades, examining how
emerging technologies shape collective identities. The introduction provides a theoretical and historical
context, discussing digital tools' empowering and inequity-amplifying roles.
The essay delves into (de)coding bias, tracing the historical link between computers and identity definition
since Charles Babbage's difference engine. It explores how digital art reflects the societal impact of
technology. The text examines digital tools' dual role in empowering marginalized communities and
perpetuating systemic inequities. It addresses the digital divide, digital redlining, and algorithmic
discrimination.
The historical context of digital art reveals a nuanced relationship with technology. Early artists critiqued gender stereotypes through subversive interventions like RTMark's Barbie Liberation Organization. The
emergence of net art in the 1990s saw artists challenging the notion of leaving behind identities online. The
text highlights key artworks questioning racial, ethnic, and gender biases in technology.
Today's discourse on technology and identity resonates with the history of digital art, offering critical
perspectives on systemic biases. The essay emphasizes the urgent need to reevaluate the history of
contemporary art addressing technology and identity, given the increasing complexity of cyberspace's
relationship with difference.
The exhibition "Difference Machines: Technology and Identity in Contemporary Art" at the Albright-Knox Art
Gallery explores the profound impact of digital technologies on our understanding of identity. The
exhibition prompts reflection on how technology shapes perceptions of race, ethnicity, gender, sexual
orientation, and disability. Through various artworks, it questions the naturalness of identity categories,
emphasizing their societal construction, influenced by technology. The show highlights the potential for
both surveillance and erasure of marginalized communities through digital tools, addressing issues of
privacy and visibility. Some works emphasize the importance of reasserting control over technology,
exploring themes of Indigenous futurism and reclaiming identity. The artists engage with the dual nature of
technology, acknowledging its potential for both empowerment and perpetuation of inequities. The
exhibition urges viewers to reconsider the role of technology in building a more equitable future and invites
imaginative exploration of identity and technology in contemporary society.
When first reading this list of fanciful classifications, we might be amused that the natural divisions
between animals appear to have been misunderstood or disregarded. Upon further reflection, however,
we may begin to question whether such divisions are really natural at all. In any classification system, the
boundaries between categories are often debatable or fluid: historical time periods and musical genres are
notoriously hard to pin down, for example. In other words, Borges’s Chinese encyclopedia draws on a racist
caricature of Asian cultures as backwards to suggest the absurdity of all classifications. In addition to not
being natural, classifications also are not neutral: they necessarily reflect what a society thinks is worth
studying. In the West, modern encyclopedias first emerged during the European Enlightenment, when
colonization and the building of empires drove the desire to organize knowledge systematically. One of
Borges’s points is that such systems never transcend the cultures that produce them, despite their claims to
universality.
The exhibition Difference Machines: Technology and Identity in Contemporary Art explores how this is true
even in the case of the categories that describe our collective identities. We are all complex beings with real
physical differences from one another, but it is society that determines how these differences are named,
which is why identities often shift across space and time. Contemporary racial categories, for example, are
not natural divisions in the human species but social constructions. There are differences between groups of
people from different geographical locations across the world, but there is no stable group of observable or
genetic traits that can be used to distinguish “Black” from “white” or “Asian.” Genetically, there are more
differences within the so-called races than between them. While there is no scientific basis for race, this
does not mean that race—or racism—does not exist. Nearly a century ago, W.E.B. Du Bois wrote that
disregarding race because it is not based on fact does nothing to address its force in societies that have
been shaped by colonialism and slavery.[2] As theorist Paul Gilroy writes, we are not born into a race,
but racialized. Less a noun than a verb, race is a kind of technology for producing difference that has been
used to justify the treatment of enslaved and colonized peoples and to affirm national identity.[3]
Like race, gender is also a social construct. Sex and gender are often conflated, but one’s sex is typically
associated with one’s biology, while one’s gender is an identity—hence the varying genders described in
different cultures and time periods.[4] As Judith Butler has written, gender is not an essential or innate trait:
“gender is in no way a stable entity,” but rather “an identity instituted through a styled repetition of acts.”[5]
Every day, we perform our gender through the way we dress, speak, move, and act towards others, in ways
that are both deeply personal and conditioned by society. For example, cultural norms may dictate that it is
more “natural” for a woman to accept commands and offer assistance than a man—which is precisely why
the default voices of virtual assistants such as Siri and Alexa are feminine.
Similarly, while a preference for sexual partners is rooted in our biology, identities such as straight or gay are
shaped by society. The term homosexual, for example, was not invented until the late nineteenth century,
and its connotation rapidly shifted. Originally coined by an activist who was trying to fight laws that
criminalized sodomy, the term was soon pathologized as a mental disorder tied to other criminal acts, such
as pedophilia. In this sense, a label can be a condemnation that marginalizes certain behaviors. Over the
past twenty years, however, labels including Lesbian, Gay, Bisexual, Transgender, and Queer have been
embraced by many (though they are still eschewed by some, particularly outside Western frameworks).
Formerly a pejorative insult, the term queer is now often seen as a unifying, positive identity. Michael
Warner defines queerness as the opposite of heteronormativity, the beliefs and policies that position
heterosexuality as the default sexual orientation.[6] Queerness can even question the value of rigidly
categorizing sexuality, which historically has been the means used to police and control it.
Even the category of “disabled” is socially constructed. In her research on ableism, Fiona Kumari Campbell
describes how laws, institutions, technologies, and other social forces distinguish normal from pathological
and able-bodied from disabled identities.[7] As the word suggests, for a body to be described as dis-abled, it
must be compared to another, able body. People only become “disabled” when living in a society that is not
designed to accommodate how their particular body functions or performs work. Most disabled people
choose to describe themselves as disabled, rather than “differently abled,” to underscore that one is
actively “disabled” by social norms, just as one is “raced” and “gendered.” Still, as with race, gender, and
sexuality, this label has real-world consequences, exposing individuals to prejudice and discrimination.
Throughout the twentieth century, various marginalized communities (including African Americans and
LGBTQ+ and disabled people) organized to demand their civil rights. Today, the fight for social justice
continues, alongside calls to make mainstream politics and culture more inclusive. At the same time,
demands for more representation can wind up reinforcing labels instead of interrogating how they relate to
systems of power. In response, many people are now questioning the politics of representation, focusing
more on how our identities are defined, how their boundaries are policed, and how they relate to systemic
inequities. Some even argue that being required to conform to an identity in order to be “counted” is its
own kind of oppression. Furthermore, asserting our identities can make us vulnerable to discrimination or
tokenization, leading many activists to echo Michel Foucault’s assessment, in his work on the origins of
modern surveillance and social control, that “visibility is a trap.”[8]
Although our collective identities can be a liability, some argue that they also can be a source of
empowerment. In the 1980s, Gayatri Chakravorty Spivak used the term “strategic essentialism” to describe
how marginalized people can temporarily organize themselves on the basis of shared identities—even if
those identities are not really “essential” or intrinsic.[9] To cite one example, people with “invisible”
disabilities who embrace the term “disabled” strategically challenge the idea that being disabled must be
bad or shameful. Historically speaking, our identities also have been the basis for the shared cultural
traditions that give meaning to many of our lives, from musical subcultures to holidays. In this way,
identities can be sources of joy and pride, too.
“In this light, perhaps one has to redefine the value of the image, or, more precisely, to create a new
perspective for it. Apart from resolution and exchange value, one might imagine another form of value
defined by velocity, intensity, and spread.
Poor images are poor because they are heavily compressed and travel quickly. They lose matter and gain
speed. But they also express a condition of dematerialization, shared not only with the legacy of conceptual
art but above all with contemporary modes of semiotic production.”
The concept of the "poor image" is explored as a dynamic, degraded copy in constant motion, characterized by low quality and deteriorating resolution as it circulates. It exists as a transient entity: a ghostly representation, a preview, a thumbnail, freely distributed through digital channels. This lumpen proletarian in the class society of appearances undergoes transformations, transitioning from film to clip and from contemplation to distraction. Liberated from cinema vaults, the poor image faces digital uncertainties, sacrificing its own substance for accessibility.
The poor image is an unauthorized descendant of an original, often challenging patrimony, national culture,
or copyright. With deliberately misspelled filenames, it serves as a lure, decoy, index, or reminder of its
former self, mocking digital technology's promises. Testifying to the violence of image dislocation within
audiovisual capitalism, poor images become the contemporary debris of audiovisual production—evidence
of rapid circulation in the global digital economy. As commodities or effigies, they convey pleasure, threats,
conspiracy theories, or resistance, showcasing the remarkable, the obvious, and the enigmatic amidst the
challenges of deciphering their degraded forms.
This chapter explores the hierarchy of images, emphasizing that the value of images is not solely based on
sharpness but also on resolution. It discusses the class society of images and how cinema, as a flagship
store, markets high-end products, while poor images circulate as more affordable derivatives. The text
highlights the impact of neoliberal policies on experimental cinema, resulting in the marginalization and
disappearance of certain visual works. It introduces the concept of "poor images" characterized by low
resolution and their role in challenging traditional hierarchies.
Examining the consequences of insisting on rich images, this chapter discusses the voluntary invisibility of
images based on aesthetic premises and the general equivalent related to neoliberal policies. It traces the
disappearance of experimental cinema from cinemas and the public sphere due to the prohibitive costs of
circulation. The chapter also delves into the post-socialist and postcolonial restructuring of nation states and
film archives. It highlights the emergence of poor images as a resurrection of marginalized content,
circulating through online platforms and reconnecting dispersed global audiences.
This chapter explores the significance of rare prints reappearing as poor images, revealing the conditions of
their marginalization and the social forces leading to their online circulation. It discusses the connections
between neoliberal restructuring, digital technology, and the privatization of media production, leading to
the disappearance of non-commercial imagery from public view. The chapter emphasizes the role of poor
images as products of disorganized privatization and traces their circulation within networks left void by
state-cinema organizations.
Drawing parallels with Julio García Espinosa's concept of imperfect cinema, this chapter discusses the
ambivalence and affective nature of contemporary imperfect cinema. It explores how the economy of poor
images corresponds to the description of imperfect cinema, enabling wider participation in image
production. The text acknowledges the dual nature of these opportunities, both for progressive and
regressive ends. It delves into the contradictions within digital communication networks and the
privatization of intellectual content, shaping the circulation and production of poor images.
This chapter positions the circulation of poor images as creating "visual bonds" akin to Dziga Vertov's vision
of linking the workers of the world through visual language. It explores the paradoxical role of poor images,
constructing global networks while being battlegrounds for commercial and national agendas. The chapter
discusses the disruptive potential of poor images, initiating a new chapter in the historical genealogy of
nonconformist information circuits. It envisions poor images as contributing to the reactualization of
historical ideas associated with these circuits.
Focusing on the afterlife of cinema and video art masterpieces, this chapter discusses the return of these
works as poor images in the digital realm. It challenges the notion of the "real thing" and emphasizes the
poor image's real conditions of existence, including swarm circulation, digital dispersion, and flexible
temporalities. The chapter concludes by noting the poor image's symbolic value as a representation of
defiance, appropriation, and the evolving landscape of visual representation in contemporary culture.
- Fluid Textuality: the mutability of texts in variants and versions, whether produced through
authorial changes, editing, transcription, translation, or print production (in a fundamental sense,
texts have always been fluid and modular)
- The advent of word processing drew intensified attention to this aspect of textuality: writers were
thrilled by the experience of cutting and pasting whole portions of text without retyping.
- The possibility of transforming a work by changing its format and typographic font with a few
keystrokes excited critical and creative imaginations
- When it first appeared, hypertext was a foreign and intriguing concept, with nodes, links, and
forking paths structured to create a multifaceted text in ways that had been tried in print formats
but that took on an aura of novelty and promise in new media
Notion of “augmented edition”: digital environments provide the ability to pull together many versions of a
single work, tracking its development, noting its variants, and presenting the whole comparative array of
witnesses.
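As a minimal sketch of what an augmented edition must do computationally, Python's standard `difflib` module can collate two witnesses of the same line and mark exactly where they diverge (the sample texts below are invented for illustration, not drawn from an actual edition):

```python
import difflib

# Two hypothetical witnesses of the same line (invented examples)
witness_a = "The quality of mercy is not strained"
witness_b = "The quality of mercie is not strain'd"

# unified_diff marks, word by word, where the two versions diverge
diff = list(difflib.unified_diff(
    witness_a.split(), witness_b.split(),
    fromfile="witness_a", tofile="witness_b", lineterm=""))
for line in diff:
    print(line)

# SequenceMatcher gives an overall similarity ratio between the witnesses
ratio = difflib.SequenceMatcher(None, witness_a, witness_b).ratio()
print(f"similarity: {ratio:.2f}")
```

Real editing environments perform this collation across many witnesses at once, but the underlying operation — aligning versions and isolating variants — is the same.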
1. The use of structured and/or tagged approaches to identify persons, themes, places, or features of
a text provides a way to maximize the intellectual investigation of documents and to display these
interpretations
2. As standards for mark-up (the tagging process used in transcriptions) have extended and improved,
many nuances in textual analysis have become part of the set of interpretive elements. Not only can
we identify what something is, but we can characterize its relation to other elements or entities
(part of, derived from, a cousin of, a version of, and so forth)
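The tagging described above can be sketched with Python's standard XML tools. The element names `persName` and `placeName` are genuine TEI (Text Encoding Initiative) conventions for marking persons and places; the transcription fragment itself is invented for illustration:

```python
import xml.etree.ElementTree as ET

# A TEI-style transcription fragment (sample passage invented for
# illustration); <persName> and <placeName> follow TEI naming conventions.
fragment = """
<p>
  <persName>Petrarch</persName> composed much of his verse at
  <placeName>Vaucluse</placeName>, far from <placeName>Avignon</placeName>.
</p>
"""

root = ET.fromstring(fragment)

# Collect every tagged person and place, turning the interpretive
# layer of the transcription into machine-readable data.
persons = [el.text for el in root.iter("persName")]
places = [el.text for el in root.iter("placeName")]
print("persons:", persons)
print("places:", places)
```

Once entities are tagged this way, the relations mentioned above (part of, derived from, a version of) can be encoded as attributes on the same elements and queried just as easily.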
Pivotal audience engagement project for theatre and the performing arts: a network of 9 theatres and 3
universities across 9 European countries, committed to audience development through the active
involvement of spectators, who become co-creators.