WHAT MAKES LEXIKAT DIFFERENT?
HOW DOES IT WORK?
Researchers analysing text computationally have typically used topic-modelling algorithms. The most widely used model is Latent Dirichlet Allocation, or LDA, which infers topics from patterns of word co-occurrence in the document being analysed. You can see an example using Charles Dickens' A Christmas Carol here.
It took one of our researchers a day and a half to write the code to do this and run the analysis. The model is also static: the results cannot be edited to customise the analysis.
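To give a flavour of the statistics that LDA-style models are built on, here is a minimal sketch (not our researcher's actual code, and standard-library Python only) that counts how often word pairs occur near each other within a small sliding window:

```python
from collections import Counter

def cooccurrence_counts(tokens, window=2):
    """Count how often each word pair appears within `window` tokens of each other."""
    counts = Counter()
    for i, word in enumerate(tokens):
        # Look only at the next `window` tokens to avoid double-counting pairs.
        for neighbour in tokens[i + 1 : i + 1 + window]:
            pair = tuple(sorted((word, neighbour)))
            counts[pair] += 1
    return counts

tokens = "marley was dead to begin with there is no doubt whatever about that".split()
counts = cooccurrence_counts(tokens, window=2)
```

Topic models aggregate counts like these across a whole corpus and then infer which words tend to cluster into the same topics.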
Lexikat is different. Instead of relying on word-proximity statistics alone, it compares the themes in your document with concept maps crowdsourced via web search results. Our system uses the internet as a giant categorisation machine, allowing it to produce much more human-like results than alternative systems.
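Lexikat's actual pipeline is not published, but the idea of matching a document's themes against crowdsourced concept maps can be illustrated with a toy sketch. Everything below is hypothetical: the hand-written snippets stand in for live web-search results, and the term extraction is deliberately naive:

```python
from collections import Counter

def top_terms(text, n=5):
    """Naive term extraction: the n most frequent lowercase words over 3 letters."""
    words = [w.strip(".,").lower() for w in text.split()]
    return {w for w, _ in Counter(w for w in words if len(w) > 3).most_common(n)}

def concept_score(doc_terms, concept_snippets):
    """Fraction of the document's top terms that also appear in the
    snippets gathered for a candidate concept."""
    snippet_terms = top_terms(" ".join(concept_snippets), n=50)
    return len(doc_terms & snippet_terms) / max(len(doc_terms), 1)

# Hypothetical snippets standing in for crowdsourced web-search results.
concept_maps = {
    "actor": ["film actor starred movie role television series"],
    "lawman": ["sheriff marshal frontier gunfight lawman deputy"],
}

doc = "Bruce Willis is an actor known for film and television roles in film"
doc_terms = top_terms(doc)
best = max(concept_maps, key=lambda c: concept_score(doc_terms, concept_maps[c]))
```

The document about an actor scores highest against the "actor" concept, which is the kind of human-recognisable categorisation the web-search approach aims for.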
You can see the difference in the example projects below, made using two Wikipedia articles: those on Bruce Willis and Wyatt Earp. The LDA model took our researcher around a day and a half to code and test. The Lexikat word clouds took just three seconds to generate via the website. What's more, while the LDA model is static, the Lexikat results can be edited by the user. The system will remember your changes, so you can customise your analysis however you like.