Interview with Brecht Declercq. By Juan Alonso
Brecht Declercq is the current President of FIAT/IFTA and Digitisation and Acquisition Manager at meemoo, the Flemish Institute for Archives. He worked as a radio archivist for the Flemish public broadcaster VRT from 2004 until 2008 and as a policy researcher for FARO in 2008. From 2008 until 2013 he was in charge of the digitisation of VRT’s radio archives. Since June 2013 he has been responsible for the digitisation of the Flemish audiovisual heritage preserved at Flemish broadcasters, cultural heritage institutions, city archives, government agencies and performing arts organisations. The acquisition of born-digital heritage collections is also part of his responsibilities.
Over his 15 years of experience, he has built a solid list of scholarly and popularizing articles and has served globally as a curator, speaker, guest lecturer, advisor, and jury and expert panel member. (Photo: © meemoo)
We have already experienced the transition from analogue television to digital television, and we have seen massive digitization projects taking place in television archives. Now, it seems that we are entering a new phase revolving around the intelligent exploitation of digital data. Large digitization projects are still being carried out in parallel, and in a more sophisticated way than in the past. In the popular imagination, digitization is seen as a necessary and simple process, but is it so? What aspects do you consider when carrying out digitization projects at meemoo?
Necessary for sure, but simple… not in the least, I think. After all, every carrier format is different and has its own characteristics, which should be respected during the digitization process.
As those who follow meemoo’s activities will certainly know, our digitization projects are a bit different from those of many other organizations. After all, meemoo does not have a collection of its own, but coordinates the digitization of (among other kinds of heritage) audiovisual media of some 165 Flemish broadcasters, libraries, archives, museums, government bodies and performing arts organisations. It would lead us too far to explain here the full course of a typical meemoo digitization project, but what I personally find very important is that a digitization project is not approached from a technical point of view only. Of course, that technical aspect is very important, but I also attach great importance to efficient design of the process and good organisation, in transport and insurance for example, or in reporting. Human factors are equally important. Bad communication between the parties involved may have very negative consequences, while a good atmosphere ensures that people are willing to go the extra mile. Meemoo always works with external digitization services, and I think it is very important to make good agreements with such companies about the goal, timing and budget of digitization projects.
For my team and my students I often use the title of a Joe Jackson song: “You can’t get what you want, till you know what you want.” During informal conversations I sometimes hear about projects that went the wrong way. I then wonder whether everyone involved really knew what they wanted. Unfortunately it still happens quite often that digitization projects are started without sufficient and thorough thinking about why the archives are being digitized and what the goal actually is. Yet many answers to real-life questions are rooted in the answer to that single, fundamental question.
Considering the costs of digitization and digital preservation, do you think it would not be more ethical to create various digitization profiles that also include lossy formats/codecs, as a balanced compromise between digital preservation needs and disk space management?
I find this a very tricky question, because it could bring my ethical considerations as an archive manager and historian into conflict with my ethical considerations as a citizen. I think the trick is to balance the two, or to achieve one without sacrificing the other. Disk space management (and the environmental and climate-related aspects of digital mass storage behind it) is without a doubt an important argument. But the question is whether we can let that argument play a role without giving up important technical aspects such as the losslessness of formats and codecs. I don’t think the opportunities in that area have been fully exploited yet. Consider, for example, the energy consumption of spinning disks, which can be reduced significantly by switching to solid state storage or linear data tape. Each digital carrier type has its advantages and disadvantages in that regard. I am also convinced that opting for a lossy codec does not necessarily lead to a much cheaper digitization process. Unless one chooses a proprietary format that requires expensive encoders and licenses, the desired output format determines only a small part of the digitization costs; any digitization company will confirm that. It is especially with regard to digital preservation that the costs become higher if one opts for a more data-intensive output format and higher specifications in terms of resolution, bit depth and the like.
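To make that last point concrete, here is a minimal back-of-envelope sketch in Python. The collection size, bitrates and storage price are purely illustrative assumptions, not meemoo figures; the point is only that the preservation storage bill, unlike the digitization bill, scales directly with the data intensity of the chosen output format.

```python
# Back-of-envelope comparison of annual preservation storage costs for a
# collection digitized to a lossless versus a lossy output format.
# All figures are illustrative assumptions, not real meemoo numbers.

HOURS = 10_000               # assumed collection size, in hours of video
COST_PER_TB_YEAR = 20.0      # assumed storage cost, in EUR per TB per year

# Assumed average bitrates in megabits per second:
profiles = {
    "lossless FFV1 (SD, 10-bit)": 90,   # illustrative
    "lossy H.264 mezzanine": 15,        # illustrative
}

for name, mbps in profiles.items():
    terabytes = mbps / 8 * 3600 * HOURS / 1_000_000  # MB/s -> MB/h -> TB total
    cost = terabytes * COST_PER_TB_YEAR
    print(f"{name}: {terabytes:,.0f} TB, ~EUR {cost:,.0f} per year")
```

Under these assumed numbers the lossless profile needs roughly six times the storage of the lossy one, while the per-hour cost of the digitization service itself would barely differ.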
I’d like to add two further considerations. Firstly, I really think it would be a shame if we let go of the quality requirements for which we have fought so long and which still haven’t been achieved in some archives. I don’t know if it’s a good idea to self-censor, to limit ourselves as archivists. After all, there are many powers above what I would call the archival system, making decisions about our functioning and imposing their restrictions. It is up to us to defend heritage interests in society, to others to defend their interests, and to politicians to make a choice. Nor should we underestimate our role as archivists. I share the plea of my compatriot Tom De Smet of the Dutch National Archives when he stresses the particularly important role of archives in a democratic society. We ensure that the audiovisual documentation about social life created by others is preserved. The mere fact that this work sometimes makes it possible to hold people accountable for crimes against humanity – I am referring, for example, to the collection of cameraman and video archivist Max Stahl in Timor-Leste – actually makes our task indispensable. The choice of a sustainable codec and container during a digitization project is nothing more than the implementation of a technical choice that is justified by that great importance.
Secondly, all of the above is of course written from the perspective of an audiovisual archive manager in a wealthy country in Western Europe. I have always been in favor of a certain level of pragmatism. If insisting on all kinds of very high technical standards means that a less financially privileged audiovisual archive cannot embark on a digitization project, then I’ll be the first to advise that archive to lower the technical standards in a well-considered way. I have had the opportunity to visit audiovisual archives in less financially privileged parts of the world as well. During those visits I became convinced that much more valuable, preservation-worthy content has already been lost because it could not be digitized than because of a lossy codec.
One of your responsibilities at meemoo is the acquisition of born-digital video collections. One of the greatest challenges is how to manage born-digital videos, and we are all in a sort of trial process right now, learning from each other. Recently, considering the number of codecs and formats, one professional approach that is being shared is to take a realistic point of view, with procedures that are not always best practices, but pragmatic and “good enough”. Julia Kim, Rebecca Fraimow and Erica Titkemeyer published an interesting paper in 2019 titled “Never Best Practices: Born-Digital Audiovisual Preservation”, where they affirm that “‘Best practice’ is hard enough to determine and follow when you have control over the creation of the media, but even more challenging when accessioning files created by donors, producers, and folklorists that never followed ‘better’ practices to begin with”[i]. What do you think?
First of all, thank you for pointing me to this article again. Several good articles and presentations have put this theme on the map in recent years, but this article – along with a presentation by Etienne Marchand at the FIAT/IFTA World Conference in 2017 about the approach to the intake of born-digital audiovisual content at INA – is perhaps the one I found most interesting, because it puts forward the most concrete solutions while still being very honest about the challenges.
I currently see roughly two approaches to these digital intake processes. The first one is what I would call “normalization at the entry”, following Titkemeyer’s approach at the University of North Carolina. I don’t think it’s a bad approach – it’s perhaps the best there is at the moment – but it is also a technically very complex one, requiring specific expertise, and above all it can be very labour-intensive. That’s why the question for me is whether it is a realistic approach for the intake of large collections.
Last year, apart from the archive material of the Flemish broadcasters, we ingested almost 100,000 born-digital files into meemoo’s digital archive, and we expect a large increase in the coming years. With such volumes, I don’t know if the “normalization at the entry” approach would be feasible for us. We opt for a second approach, which I would like to call “manage the multitude”. This means that we will of course perform a number of automatic checks at the entrance to the archive, but we will ingest the files as they are. In doing so, we ensure a good characterization, while increasing our efforts in monitoring both what we have in the archive and what is happening in the world of formats and codecs (“preservation watch”), so that we can intervene in a structural way if problems should arise with the digital preservation of certain formats or codecs.
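As a purely hypothetical illustration of what such automatic checks and characterization at the entrance could look like, the Python sketch below runs ffprobe (the analysis tool shipped with ffmpeg) over incoming files; the list of preferred codecs is an invented example, not meemoo’s actual policy or pipeline.

```python
import json
import subprocess
import sys

# Hypothetical example list, not meemoo's actual preservation policy:
PREFERRED_VIDEO_CODECS = {"ffv1", "v210", "prores"}

def characterize(path: str) -> dict:
    """Return ffprobe's technical metadata for one file as a dictionary."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

def intake_check(path: str) -> None:
    """Ingest-time check: characterize the file as it is, and flag any
    video codec that should go on the preservation watch list."""
    info = characterize(path)
    for stream in info.get("streams", []):
        if stream.get("codec_type") == "video":
            codec = stream.get("codec_name", "unknown")
            status = "ok" if codec in PREFERRED_VIDEO_CODECS else "watchlist"
            print(f"{path}: video codec {codec} -> {status}")

if __name__ == "__main__":
    for file_path in sys.argv[1:]:
        intake_check(file_path)
```

The file is still ingested as it is; the characterization only feeds the preservation watch, which is the essence of the “manage the multitude” idea.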
We do, however, steer our content partners in the direction of digitally sustainable choices as much as possible, for example when they have to determine which version they want to keep as an archive master. We are even committed to the adoption of sustainability criteria by content creators during the production phase. For example, we want to offer Flemish theatre companies a template contract for camera recording services, which clearly stipulates that the recordings must be made, or at least delivered, in digitally sustainable formats.
Finally – and here I’d like to echo Kim, Fraimow and Titkemeyer – we are still a long way from a definitive solution, if one ever comes. I believe this issue will become one of the most important in audiovisual preservation in the coming years. If it weren’t already being done by others, I would certainly want to contribute to keeping it high on the agenda myself.
In 2019 the EBU Newsroom Report stated: “You will often hear that good metadata is needed to succeed in such a task and content needs to be tagged correctly in the archives, otherwise the machine will miss it. But this is not an absolute requirement anymore, as metadata can be generated automatically, and machines are becoming so sophisticated that they are able to detect relevant content even when the metadata is missing.” At the time, you found that statement overly optimistic. What is your opinion on the evolution of the “independence” of AI? Do you think this statement is still idealistic? How is automatic metadata generation affecting the work of audio-visual archivists today?
I still find that statement at least very optimistic, and definitely simplistic. In the context of that report, the statement was made to argue that it was possible to have a computer compile a full programme of archive clips by means of an algorithm. Of course that is possible, just as it is possible to replace a journalist with a computer and have it write a news story, or replace a cook with a robot and have it prepare a meal. But the question for me is whether that annihilates the added value of a human archivist – or, in this case, also that of an archive researcher. I still answer that question negatively, without a shadow of a doubt. Of course there are archival queries where it doesn’t really matter which archive fragments are shown, as long as they vaguely meet the search criteria. It is certain that in such a simple, low-quality scenario the traditional role of the documentalist and the archive researcher can disappear in the short term. With audiovisual archives, we thus end up in a world in which archival search and retrieval look like a Google search: because the algorithms have assigned so many keywords to so many archive clips, one will always find a fragment that may fit the purpose. Even if it’s not the best fit, the producer and the end user don’t care much, because they don’t know that something better, more beautiful, more creative could have been used.
But there is also another kind of query, namely the kind in which one knows in advance which fragment one is looking for and intends to use only that particular fragment, because one realizes that that one fragment can have a special, original, creative or even humorous effect that all the others lack. That’s where human annotation, archival research and curation can still add value. I am optimistic by nature, and I have even heard the comment that I have too much faith in the capabilities of AI algorithms for archival description. But if there’s one thing I’m a little afraid of in this area, it’s that people would no longer appreciate this human creativity. That would be like a world in which computer-controlled soup kitchens replace Michelin-starred restaurants. In any case, we as archivists have the task of continuing to demonstrate this human added value.
In addition I would also like to emphasize – and this is not a new plea, because the FIAT/IFTA Media Management Commission, among others, has been repeating it since its Hilversum seminar in 2013 – that an audiovisual archive completely described by algorithms must still be controlled by people. It will be people who decide which algorithm can best be applied to which content, or which combinations of description techniques are the best choice for a particular collection, for example because they realize where the biases are. It will be people who do the quality checks on the results, sometimes even correcting the algorithms where they make mistakes. And it will be people who continue to guide archival traffic in the right direction. Theo Mäusli – who worked in the RSI archive, where speech-to-text has been used in daily archive practice since 2011 – summarized these new tasks in 2015 as “the audiovisual archivist as a coach”. I think that’s an interesting metaphor.
There is a lot of talk about AI and automatic metadata, but perhaps less about standardization and ethics in the implementation and development of AI services. What is your opinion in this regard? Which experiences can you share?
The ethical questions regarding the use of AI algorithms in audiovisual archives are still relatively recent, but certainly justified. I like the 360° perspective that, for example, my colleague Bart Magnus shows when he assesses those ethical questions. In the steering group that supervises the implementation of description algorithms at meemoo, he points out not only the biases of certain algorithms, but also privacy issues, or even the working conditions of the documentalists who create reference sets or ground truth data.
I find it particularly fascinating to see how people with a high-tech background start working on these kinds of issues together with people with a philosophical, historical or linguistic education. One aspect that I find particularly interesting is whether big tech companies should be allowed to use public broadcast archives to train their algorithms. Virginia Bazán-Gil, who leads the archive innovation projects at RTVE in Spain, has done very interesting work on this issue. On the one hand it can be said that the archive material belongs to the public and should not be used to make big tech companies even wealthier than they already are. But on the other hand, we must also realize that it is a unique opportunity to influence these algorithms – which will influence our future anyway – with the values inherent in public broadcasting content. Taking this into account, I would prefer that Google’s image recognition algorithm be trained on the archive images and archive metadata of a public broadcaster from a democracy, rather than on images and metadata from a dictatorship.
In addition, I also think that we should not be naive: just as we no longer agree today with some views from the past that are implicitly or explicitly present in our audiovisual archives (in the image, in the sound, but also in the metadata), in a few decades we will be frowning at, or even feel disgusted by, certain views of today. It is part of human history that public opinion changes over time. Since algorithms are a product of human action, trained on image, sound and metadata created by humans, the biases present in them will sooner or later be considered ethically unacceptable. Perhaps the best algorithms will prove to be those that evolve with the public opinion of the time and place in which they are used.
Sometimes, even among professionals in the sector, software is seen as the solution to the audio-visual archive itself. We are frequently asked, “Please, recommend a software package to manage our audio-visual collection”, without much further information. How would you answer this generic question? To what extent is software important for carrying out professional work?
My generic answer would be that, unfortunately, there are no one-size-fits-all solutions. Archives considering buying any kind of software should always maintain a critical attitude vis-à-vis the strong marketing discourse used by the sales departments of software companies. That said, software solutions are of course indispensable in today’s audiovisual archive world. Provided we are aware of each other’s strengths and weaknesses, I also think that we should consider the software industry, including the communities that maintain open source software, as partners in meeting the challenges posed by digital audiovisual archive management. It is important that software developers have a good eye for what their customers actually want, but also that archives are – to some extent – willing to adapt their existing practices to what software can deliver today. Sometimes our use cases are a little less specific than we think, or we cling too much to past practices. Archivists should always be careful not to confuse conservation with being conservative.
In general, I have seen two major evolutions in software development for audiovisual archives over the past few years. First, we are moving away from the monolithic blocks that MAM systems used to be. These systems are becoming more and more modular, allowing all kinds of tasks to be performed via third-party integrations. I think that’s a good evolution, simply because the technological developments behind the software can move at different speeds. A few years ago the FIAT/IFTA Media Management Commission surveyed how long MAM systems typically remain in use and how long it takes to replace them. The replacement alone can easily take up to four years, which is certainly out of proportion to the current speed of technological development. That’s why I see more benefit in systems that are developed on the basis of CI/CD principles (continuous integration, continuous delivery).
Another evolution – which has also been going on for a number of years and partly corresponds to the previous one – is the move away from the traditional licensing model of large software packages, in favor of open source software, but sometimes also in favor of in-house development; I refer explicitly to the MAM systems of NRK in Norway and ABC in Australia. Even large media companies – where scepticism towards open source software has traditionally been high – have been rapidly adopting open-source solutions in recent years. An open-source encoder such as ffmpeg, for example, is integrated in many current software solutions; some archives use it without even realizing it. However, open-source applications are still mainly used for small tasks, the so-called micro-services, as part of a larger workflow. I don’t think that a fully-fledged, open source MAM system for large audiovisual archives such as those of broadcasters is around the corner.
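As a sketch of what such an ffmpeg-based micro-service task might look like, the Python snippet below wraps a single lossless transcode to FFV1 in a Matroska container. The parameters are illustrative choices, not a recommendation made in the interview.

```python
import subprocess

def to_ffv1_mkv(source: str, target: str) -> None:
    """One micro-service-sized task: losslessly re-encode the video track
    to FFV1 version 3 with per-slice checksums, inside a Matroska
    container, while copying the audio untouched."""
    subprocess.run(
        ["ffmpeg", "-i", source,
         "-c:v", "ffv1", "-level", "3", "-slicecrc", "1",
         "-c:a", "copy",
         target],
        check=True,
    )

# Example usage (hypothetical file names):
# to_ffv1_mkv("tape_transfer.mov", "tape_transfer_ffv1.mkv")
```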
In any case, I consider it useful to reflect on the impact of these developments on the role of the audiovisual archivist. A skill that I find really valuable for future audiovisual archivists is being able to build bridges between the daily needs of archivists and the possibilities that software can offer. The archivist will have to be more and more aware of the evolution of the software market and the solutions that may or may not be available. All too often I still see tasks being solved manually for which automated solutions are now available.
As archivists, we must give access to and disseminate our collections, but sometimes we get tangled up in the distinction between access and publication, because it is impossible to identify an author and copyright law may pose some issues. Do you think we need changes in audio-visual rights management regulations, better suited to digital environments and to the missions of non-profit institutions?
In principle, I completely agree with your statement. But we should also be aware that copyright was not invented to bully audiovisual archivists. Moving images, sound and photos – the content with which audiovisual archives work – are among the most popular content circulating in our society. It makes sense that those who create this content build a business model around it. But as with any economic model, there are also excesses, whereby earning money is no longer in fair proportion to the creative merit of the maker, which sometimes lies many decades behind us. I can’t help finding it a pity that the efforts made by the taxpayer to preserve cultural heritage content cannot be fully valorised as a return for society as a whole, because rights holders block this, for example because they consider scientific reuse a potential new source of income. In the European Union the latest changes to the directives in this area are a good evolution, although it remains to be seen how exactly the member states will implement them. But outside Europe, for example in South Africa, I sometimes still see distressing situations, where copyright law severely hinders the preservation of content by heritage institutions, for example through digitization. Let this be a plea to put an end to this quickly, because neither the heritage manager nor society as a whole benefits from it – not even the rights holder, because what are copyrights worth if the content itself is gone?
Within FIAT/IFTA we sometimes find ourselves caught in the crossfire, because notwithstanding the general plea for more openness and easier access to archival collections, for some public broadcasters’ archives the sale of archive content for commercial reuse is an important, even indispensable source of income. For me, however, it’s not so important to preserve that business model per se, but rather to guarantee that sufficient financing of these broadcasting archives remains in place. In other words, if the government – as is the case in Switzerland, for example – requires the public broadcaster by law to make its archive content available to commercial competitors for a very low fee, the resulting loss of income suffered by the public broadcaster should be compensated in some other way. Only in this way can the proper functioning of these broadcasting archives, and the important return they deliver to society, continue to be guaranteed.
Currently, we are seeing a large amount of audio-visual material being produced by society and published on social networks over which we have no control. This material, precisely because of its sheer volume, is very valuable for picturing reality. The fact that most of it is produced spontaneously, without the intention of being a historical document, also makes it less partial than a classic oral history program, for instance. All of this occurs in a scenario in which we participate without realizing that we are generating evidence through our daily activity. We are not aware of it, because we do not control the “means of production”. Therefore, we cannot exploit that data, which most of the time is ephemeral. In this context, it seems that historians could change their role from sole authors of historical narratives to data mining specialists, who need specific software, AI algorithms and datasets, among other things. The same can happen with archivists. Initiatives such as Witness or Documenting the Now have emerged, trying to provide tools to undertake massive data exploitation projects on these platforms. What are your thoughts?
I do not fully agree that we do not control the means of production ourselves. To an important extent they are in our hands – literally, even, when we think of our smartphones. Data and privacy protection legislation in the EU, for example, is being developed further, so that we can choose more consciously what we share and what we don’t. But I also see ever more tools being developed that make it possible to monitor or even scrape the content of social media platforms, so that it can be preserved. Perhaps the historians of the future will remain the data mining specialists they have always been, but this time with much more powerful digital tools, enabling insights that would be impossible without the enormous computing power that exists today. In that sense, I’m a big fan of the work of Lev Manovich, for example, who uses both historical and contemporary data to digitally demonstrate evolutions that simply couldn’t be imagined, let alone shown, in the era of old-school, manual historiography.
The title of the FIAT/IFTA World Conference 2022 is “Archive out of the box!”. Why this title? How is it related to the current evolution of television and audio-visual archives?
Finding a good title for a FIAT/IFTA World Conference is always a challenge, as our ambition is to align with the challenges that the global audiovisual archive community faces, wherever its members are on the timeline of technological evolution.
This year we particularly focused on the fact that our organizations are constantly transforming. As a result, archival content management has become a strategic focus that can be addressed from different perspectives within a media organization or by its external stakeholders. Thus, parts of the archivist’s role are starting to get linked to other emergent areas (e.g. fact checking, curation of OTT platforms, serving new target groups, …). Far from seeing this as a threat, we believe that archives should take it as an opportunity to do things in a more creative, collaborative and democratic way, and to be part of the decision-making process instead of merely undergoing it.
‘Out of the box’ therefore refers not only literally to the travelling of archive material outside of the archive space, but also, more figuratively, to liberation from clichés such as ‘old’ and ‘irrelevant’. It even refers to the emancipation of the archivist, who becomes a full partner in the decision-making process of media organizations or – in the case of audiovisual archives outside broadcasting – an appreciated curator of an online offer that occupies an ever stronger position within the broad landscape of online content.
Today, what are the major threats to audio-visual archives? And what are the greatest opportunities?
When it comes to threats, what Mike Casey calls degralescence – the combination of the degradation of carriers and the obsolescence of technology – is still, in my opinion, the most pressing issue. It is true that in the wealthy parts of the world financing models, project-based approaches and technological solutions are available to effectively combat this two-headed monster. But if we take a broader, more global perspective, we can’t help but recognize that we are losing the battle. Take a country like Brazil, which, although it has been through particularly hard times recently, is not among the poorest in the world. At the moment, the content of hundreds, perhaps thousands of 2” open-reel video tapes is in danger of being lost there, because there are hardly any properly working playback devices left in all of South America. We’re talking about tapes that were recorded in the 1960s and thus document a hugely important period in Brazilian history. Degralescence does not strike overnight: it is an insidious poison that strikes the poorer parts of the world first. The fact that much more audiovisual heritage can be preserved in the rich part of the world than elsewhere threatens to confirm global imbalances. Poorer countries are simply in danger of being deprived of the opportunity to come into contact with their own history of the second half of the twentieth century.
So let’s not be naïve and think that degralescence has been conquered. On the contrary, I would say the next few years are crucial. Are we able to devise widely applicable solutions? It is no longer about lending out a few devices or supporting a local project. What we need is the instigation of a broad sense of urgency, both at the highest political level and at grassroots level.
As the biggest opportunity – and now I’m looking at things at a very high level – I would like to mention the attractiveness of audiovisual heritage. Our current society is so permeated by audiovisual media that it is not very difficult to show our fellow citizens its importance, not only today but also in the past. It has happened to me several times that, when I mentioned that I work in a media archive and talked about the content we preserve, the eyes of my audience started to sparkle. An enormous imaginative power emanates from images and sounds. It is up to us, archivists, to convert that interest among our audience into leverage for our archives and their contents. In this regard, I would like to refer to meemoo’s education platform ‘Archives for Education’. The fact that our children’s means of communication today largely consist of moving images and sounds makes it particularly interesting to use audiovisual archive materials in an educational context. It’s a tremendous help in establishing a connection with their world. I think that such educational platforms with archival content are a wonderful way to generate real impact in people’s daily lives with an audiovisual collection.
Finally, what would you say to a young student who would like to specialize in the audio-visual archives field? Do you think specialization is necessary? Do you think multidisciplinarity is interesting?
There is of course always something to be said for specialization in a domain where scarcity is expected – for example in the programming of micro-services. But looking at things from my own daily working context again, I believe in the combination of typical digital skills with the attitudes that no one masters as well as those who have studied the broad humanities. Several of my own team members have a humanities background, such as history or linguistics, but have added an additional year in digital humanities, artificial intelligence or the like. This ensures that they’re well aware of the importance of archives and capable of broad, critical reflection on their position, but equally able to use, and even to program, at least a few digital tools to manage audiovisual archive collections. They have no fear at all of expanding that digital toolset according to the needs of the moment. Finally, I don’t want to leave unmentioned the importance of project management. A great deal of the functioning of an audiovisual archive is nowadays tackled on a project basis, and it is very useful if you master one or more project management methods.
There are, of course, the specialized master’s programs in audiovisual archive management, such as those in Amsterdam, Paris, New York and elsewhere. But I’m afraid those are only for people who know exactly what they want, who are extremely passionate and – in some cases – also have the means to fund it. I have regularly supervised trainees from these courses and I enjoy their open-mindedness, their specific expertise and their international outlook. But at the same time, working in an audiovisual archive is definitely not impossible without such a master’s degree. Let us also not forget that many audiovisual archivists receive their actual training on the job; I am no exception to that rule myself. That is precisely why I am such a fan of INA’s FRAME course, which we as FIAT/IFTA have been proud to support from day one, because it makes it possible to structure the knowledge acquired on a day-to-day basis and to fill in some gaps.
[i] Julia Kim, Rebecca Fraimow and Erica Titkemeyer, “Never Best Practices: Born-Digital Audiovisual Preservation”, Code4Lib Journal, Issue 43, 14 February 2019.