How to evaluate a #dh tool?

Edit: I added some more embedded Voyant Tools views.

I am currently running a seminar on “DH: tools, methods, epistemology” (Humanités numériques: outils, méthodes, épistémologie [in French]). Last week, we welcomed Pierre Ratinaud, who develops the excellent text analysis tool IRaMuTeQ (a Python-based visual interface to R). Next week, we’ll go a bit further down the text analysis road. We’ll try to use a corpus made of speeches given by Roy Jenkins as head of the European Commission (1977-1980).

The aim will be to analyse this corpus with three different tools (IRaMuTeQ, Paper Machines, Voyant Tools) in order to understand the differences and similarities between them, and to emphasize how important the choice of a tool is and how decisive it is for the research results you obtain. Choosing a digital tool is fully part of your research method.

So I created this Roy Jenkins corpus (its quality is questionable, but this is an exercise, not actual research): 74 speeches, in three forms.
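For the IRaMuTeQ form, here is a minimal sketch of what such a preparation could look like, assuming one plain-text file per speech; the folder name, the file naming scheme and the metadata variables are all hypothetical, not the actual ones used for this corpus.

```python
import pathlib

# Minimal sketch (not the exact preparation used for this post): turn a folder
# of plain-text speeches into a single corpus file in IRaMuTeQ's format, where
# each text is introduced by a "four stars" line carrying *variable_modality
# metadata. Folder name, file naming scheme and variables are hypothetical.

SPEECHES = pathlib.Path("jenkins_speeches")  # one .txt file per speech
OUTPUT = pathlib.Path("jenkins_corpus.txt")

with OUTPUT.open("w", encoding="utf-8") as out:
    for txt in sorted(SPEECHES.glob("*.txt")):
        year = txt.stem[:4]  # assumes filenames like "1977_rome.txt"
        out.write(f"**** *year_{year} *speech_{txt.stem}\n")
        out.write(txt.read_text(encoding="utf-8").strip() + "\n\n")
```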

Those three tools are not always comparable, in the sense that they do not always offer similar functionalities (apart from word clouds, the least useful and most questionable visualization).

A short check-list for evaluating DH tools

So here is an attempt at a check-list for researchers who want to evaluate a digital tool (a small sketch of it as a comparison grid follows the list):

Gathering information

  • Is there a website about this tool?
  • Did you collect and summarize information on this website?
  • Are there research articles about the tool? Did you read them?
  • Does the information gathered (websites and articles) show that the tool fits your research goals?

Maturity/stability of the tool

  • Did you find a roadmap for the tool?
  • If so, is the tool still far from its complete version (the version that will include all the functionalities it is scheduled to have)?
  • Is it stable? (That means trying the tool and/or visiting forums and discussion lists to see what its users say about it.)

Sustainability of the tool

  • How long has this tool existed?
  • Is there a strong community supporting it?
  • Is it open source? (So its algorithms can be evaluated.)
  • Can you export your data and your results? In what formats?

Sustainability of your research

  • Well… did you understand how the tool works, including its algorithms? (See the first part of this check-list.)
  • Will the tool allow your research results to be verifiable and reproducible?
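To make the check-list operational across several tools at once, it can help to keep the answers in a simple grid. Below is a minimal sketch in Python; the criteria paraphrase the list above, and the one filled-in answer is a placeholder, not an actual assessment.

```python
# Minimal sketch: the check-list as a comparison grid, so the same questions
# are answered side by side for every candidate tool. Criteria paraphrase the
# list above; answers start as None and are filled in during evaluation.

CRITERIA = [
    "website found and summarized",
    "research articles read",
    "fits research goals",
    "roadmap found",
    "close to its complete version",
    "stable in practice",
    "long-lived / strong community",
    "open source",
    "data and results exportable",
    "algorithm understood",
    "results verifiable and reproducible",
]

def evaluation_grid(tools):
    """Build an empty grid: criterion -> {tool: answer (None until assessed)}."""
    return {criterion: {tool: None for tool in tools} for criterion in CRITERIA}

grid = evaluation_grid(["IRaMuTeQ", "Paper Machines", "Voyant Tools"])
grid["open source"]["IRaMuTeQ"] = True  # placeholder answer, to be verified
```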

That’s what we will be asking ourselves next Wednesday with the three tools. Meanwhile, here are some uncommented results, for comparison.

Results

Voyant Tools

(embedded Voyant Tools views)

Paper Machines

IRaMuTeQ

Dendrogram (classification)

(figure: dendrogramme_1)

Similarity analysis

(figure: graph_simi_4)

Word cloud

(figure: nuage_1)

Comparison

What I notice when I compare the results of the same text analysis across three different tools (apart from the poor OCR quality of some documents):

  • Design is at the heart of interpretation: the way these results are displayed influences how the data is interpreted. This is blatant in the case of word clouds, and that is one reason why word clouds are so questionable;
  • Those three tools are rather complementary. It may seem obvious, but it is really important not to rely on a single tool;
  • It is easier to get back to the text and to alternate distant and close readings of your sources with IRaMuTeQ especially, and with Voyant Tools (a bit less so than with IRaMuTeQ). It is sometimes possible with Paper Machines, but not to the same extent. For a historian like me, being able to read sources at multiple scales is simply strategic; a minimal sketch of such a back-to-the-text view follows this list.
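As an illustration of what “getting back to the text” means in practice, here is a minimal keyword-in-context (KWIC) sketch in Python. It is not how any of the three tools implement their concordances; the corpus file name reuses the hypothetical one from the sketch above, and the keyword is an arbitrary example.

```python
import re

# Minimal KWIC sketch: jump from a distant-reading result (a frequent word)
# back to close reading by listing every occurrence with its surrounding text.

def kwic(text, keyword, window=40):
    """Yield every occurrence of `keyword` with `window` characters of context."""
    pattern = re.compile(rf"\b{re.escape(keyword)}\b", re.IGNORECASE)
    for m in pattern.finditer(text):
        left = text[max(0, m.start() - window):m.start()]
        right = text[m.end():m.end() + window]
        yield f"...{left}[{m.group(0)}]{right}..."

with open("jenkins_corpus.txt", encoding="utf-8") as f:  # hypothetical file
    corpus = f.read()

for line in list(kwic(corpus, "monetary"))[:10]:  # arbitrary example keyword
    print(line)
```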



4 thoughts on “How to evaluate a #dh tool?”

  1. Pingback: Avisos de Levante

  2. elifsu

    During the “Text Analysis with Online Tools” workshop with Susan Schreibman at the Bern DH summer school in 2013, I took notes here (https://docs.google.com/document/d/1oQVt1w0OEHdBAMkvhW0lj-7opDGCnFi5xVB88-lGRaA/edit?usp=sharing ), with screenshots for each tool… You will see, it is quite rudimentary, but I would have been glad if others had done better (nobody shared anything or put it online…)…

    Every year I ask my students to do the same work (better) with various tools, but I have not yet found a way to publish it on our blog without ending up with ten catalogues of the same tools… Any idea is welcome!

  3. @seeksanusername

    Thanks for the article.
    I have always found useful the posts that run the same dataset through different tools to bring out the pros and cons of each (even if exhaustiveness is impossible).
    However, regarding Voyant Tools, the single embed used here does not do it justice compared with the other tools discussed, because it has many other built-in tools and skins that allow analyses closer to those of IRaMuTeQ or Paper Machines. For example, Bubblelines (http://goo.gl/Va0LnM), the ScatterPlot (http://goo.gl/SCKORd), clustering (http://goo.gl/184e17), etc.
    Antoine

