Gensim – Topic Modelling for Humans

Gensim is a Python library for topic modelling, document indexing and similarity retrieval with large corpora. Its target audience is the natural language processing (NLP) and information retrieval (IR) community.
If this feature list left you scratching your head, you can first read
more about the Vector Space Model and unsupervised document analysis
on Wikipedia.
This software depends on NumPy and SciPy, two Python packages for
scientific computing. You must have them installed prior to installing
gensim.
It is also recommended that you install a fast BLAS library before installing
NumPy. This is optional, but using an optimized BLAS such as MKL, ATLAS or
OpenBLAS is known to improve performance by as much as an order of
magnitude. On OSX, NumPy picks up its vecLib BLAS automatically,
so you don't need to do anything special.
Install the latest version of gensim:

```bash
pip install --upgrade gensim
```

Or, if you have instead downloaded and unzipped the source tar.gz package:

```bash
python setup.py install
```
For alternative modes of installation, see the documentation.
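To quickly confirm that the installation worked, you can import gensim and print its version (just a sanity check, not an official installation step):

```python
# Sanity check: if this runs without an ImportError, gensim is installed.
import gensim
print(gensim.__version__)
```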
Gensim is being continuously tested under all
supported Python versions.
Support for Python 2.7 was dropped in gensim 4.0.0 – install gensim 3.8.3 if you must use Python 2.7.
Many scientific algorithms can be expressed in terms of large matrix
operations (see the BLAS note above). Gensim taps into these low-level
BLAS libraries, by means of its dependency on NumPy. So while
gensim-the-top-level-code is pure Python, it actually executes highly
optimized Fortran/C under the hood, including multithreading (if your
BLAS is so configured).
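If you want to see which BLAS/LAPACK backend your NumPy build actually links against (and hence what gensim ends up calling under the hood), NumPy can print its own build configuration; this is purely a diagnostic sketch:

```python
# Print the BLAS/LAPACK libraries this NumPy build was compiled against.
# Whether gensim benefits from multithreading depends on this backend.
import numpy
numpy.show_config()
```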
Memory-wise, gensim makes heavy use of Python’s built-in generators and
iterators for streamed data processing. Memory efficiency was one of
gensim’s design goals, and is a central feature of gensim, rather than
something bolted on as an afterthought.
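To illustrate that streamed style, here is a minimal sketch of a corpus that never loads all documents into RAM at once; the file name `mycorpus.txt` and the one-document-per-line format are assumptions made for this example only:

```python
from gensim import corpora, models

# Build the dictionary by streaming over the file once (one document per line).
# 'mycorpus.txt' is a hypothetical file used only for this sketch.
dictionary = corpora.Dictionary(
    line.lower().split() for line in open('mycorpus.txt')
)

class MyCorpus:
    """Yields one bag-of-words vector at a time, keeping memory usage flat."""
    def __iter__(self):
        for line in open('mycorpus.txt'):
            yield dictionary.doc2bow(line.lower().split())

corpus = MyCorpus()  # no list of documents is ever held in memory
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=10)
```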
For commercial support, please see Gensim sponsorship.
Ask open-ended questions on the public Gensim Mailing List.
Raise bugs on GitHub, but please make sure you follow the issue template. Issues that are not bugs or fail to provide the requested details will be closed without inspection.
| Company | Industry | Use of Gensim |
|---|---|---|
| RARE Technologies | ML & NLP consulting | Creators of Gensim – this is us! |
| Amazon | Retail | Document similarity. |
| National Institutes of Health | Health | Processing grants and publications with word2vec. |
| Cisco Security | Security | Large-scale fraud detection. |
| Mindseye | Legal | Similarities in legal documents. |
| Channel 4 | Media | Recommendation engine. |
| Talentpair | HR | Candidate matching in high-touch recruiting. |
| Juju | HR | Provide non-obvious related job suggestions. |
| Tailwind | Media | Post interesting and relevant content to Pinterest. |
| Issuu | Media | Gensim's LDA module lies at the very core of the analysis we perform on each uploaded publication to figure out what it's all about. |
| Search Metrics | Content Marketing | Gensim word2vec used for entity disambiguation in Search Engine Optimisation. |
| 12K Research | Media | Document similarity analysis on media articles. |
| Stillwater Supercomputing | Hardware | Document comprehension and association with word2vec. |
| SiteGround | Web hosting | An ensemble search engine which uses different embeddings models and similarities, including word2vec, WMD, and LDA. |
| Capital One | Finance | Topic modeling for customer complaints exploration. |
When citing gensim in academic papers and theses, please use this
BibTeX entry:
```bibtex
@inproceedings{rehurek_lrec,
  title = {{Software Framework for Topic Modelling with Large Corpora}},
  author = {Radim {\v R}eh{\r u}{\v r}ek and Petr Sojka},
  booktitle = {{Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks}},
  pages = {45--50},
  year = 2010,
  month = May,
  day = 22,
  publisher = {ELRA},
  address = {Valletta, Malta},
  note = {\url{http://is.muni.cz/publication/884893/en}},
  language = {English}
}
```