If you build applications for clients or have open-sourced some shiny apps, a question that arises is how your application is being used. One way to find out how your hard work is being consumed is to add logging statements to your code and inspect the logs afterwards.
An easier way to track usage of your application, however, is to send page views or application events to Google Analytics. That is exactly what the GAlogger R package (https://github.com/bnosac/GAlogger) does. It allows you to log R events and R usage to Google Analytics and was created with the following use cases in mind:
Track usage of your application
- If someone visits a page in your web application (e.g. Shiny) or web service (e.g. RApache, Plumber), use the GAlogger R package to send the page and the title of the page which is visited, so that you can easily see how visitors are using your application
- Do you want to know which user inputs are set in your Shiny app? You can now collect these events easily with this R package
Track usage of your scripts / package usage / functions
- Keep track of how your internal useRs are using your package (e.g. when a user loads your package or calls a specific function or web service)
- Keep track of the status of a long-running process, or log an error message if something failed
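As an illustration of the package-usage use case, a package author could send an event whenever the package is attached. This is only a sketch: "mypackage" and the tracking ID are placeholders, and it assumes GAlogger is installed; the GAlogger functions used here are shown in more detail below.

```r
# Sketch: log package loads from a package's .onAttach hook.
# "UA-XXXXX-Y" is a placeholder for your own tracking ID.
.onAttach <- function(libname, pkgname) {
  try({
    GAlogger::ga_set_tracking_id("UA-XXXXX-Y")
    GAlogger::ga_set_approval(consent = TRUE)
    GAlogger::ga_collect_event(event_category = "Package",
                               event_action = paste("loaded", pkgname))
  }, silent = TRUE)
}
```

Wrapping the calls in try keeps your package usable even when the user is offline or Google Analytics is unreachable.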
First of all, get the R package from https://github.com/bnosac/GAlogger
Get your own free tracking ID from Google Analytics (it looks like UA-XXXXX-Y), set it as shown below and indicate that you approve that data will be sent to Google Analytics. Put that code in your Shiny app or R script.
library(GAlogger)
ga_set_tracking_id("UA-XXXXX-Y")
ga_set_approval(consent = TRUE)
Next start sending data to Google Analytics. You can either send page visits or events.
Someone is visiting your web service or Shiny web application? Great, log it as follows.
ga_collect_pageview(page = "/home")
ga_collect_pageview(page = "/simulation", title = "Mixture process")
ga_collect_pageview(page = "/simulation/bayesian")
ga_collect_pageview(page = "/textmining-exploratory")
ga_collect_pageview(page = "/my/killer/app")
ga_collect_pageview(page = "/home", title = "Homepage", hostname = "www.xyz.com")
An event is happening in your app or R code? Great, log it as follows.
ga_collect_event(event_category = "Start", event_action = "shiny app launched")
ga_collect_event(event_category = "Error", event_label = "convergence failed", event_action = "Oh no")
ga_collect_event(event_category = "Error", event_label = "Bad input",
                 event_action = "send the firesquad", event_value = 911)
ga_collect_event(event_category = "Simulation", event_label = "Launching Bayesian multi-level model",
                 event_action = "How many simulations", event_value = 10)
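To cover the error-tracking use case mentioned above, a long-running job can be wrapped in tryCatch so that failures are logged as events. A minimal sketch, assuming GAlogger is loaded and consent was given; run_simulation is a hypothetical placeholder for your own long-running code.

```r
# Sketch: log start, success and failure of a long-running process.
# run_simulation() is a placeholder for your own code.
result <- tryCatch({
  ga_collect_event(event_category = "Process", event_action = "started")
  out <- run_simulation()
  ga_collect_event(event_category = "Process", event_action = "finished")
  out
}, error = function(e) {
  ga_collect_event(event_category = "Error",
                   event_action = "process failed",
                   event_label = conditionMessage(e))
  NULL
})
```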
Visit Google Analytics to see who visited you or what happened in your script
- Logged pageviews can be viewed in the Google Analytics > Behaviour tab or in the Real-Time part of Google Analytics
- Logged events can be viewed in the Google Analytics > Behaviour > Events tab or in the Real-Time part of Google Analytics
BNOSAC is happy to announce the release of the udpipe R package (https://bnosac.github.io/udpipe/en) which is a Natural Language Processing toolkit that provides language-agnostic 'tokenization', 'parts of speech tagging', 'lemmatization', 'morphological feature tagging' and 'dependency parsing' of raw text. Next to text parsing, the package also allows you to train annotation models based on data of 'treebanks' in 'CoNLL-U' format as provided at http://universaldependencies.org/format.html.
The package provides direct access to language models trained on more than 50 languages. The following languages are directly available:
afrikaans, ancient_greek-proiel, ancient_greek, arabic, basque, belarusian, bulgarian, catalan, chinese, coptic, croatian, czech-cac, czech-cltt, czech, danish, dutch-lassysmall, dutch, english-lines, english-partut, english, estonian, finnish-ftb, finnish, french-partut, french-sequoia, french, galician-treegal, galician, german, gothic, greek, hebrew, hindi, hungarian, indonesian, irish, italian, japanese, kazakh, korean, latin-ittb, latin-proiel, latin, latvian, lithuanian, norwegian-bokmaal, norwegian-nynorsk, old_church_slavonic, persian, polish, portuguese-br, portuguese, romanian, russian-syntagrus, russian, sanskrit, serbian, slovak, slovenian-sst, slovenian, spanish-ancora, spanish, swedish-lines, swedish, tamil, turkish, ukrainian, urdu, uyghur, vietnamese
We hope that the package will allow other R users to build natural language applications on top of the resulting parts of speech tags, tokens, morphological features and dependency parsing output. In particular, we hope that applications will arise which are not limited to English (like the textrank R package or the cleanNLP package, to name a few).
Easy installation, great docs
- Note that the package has no external software dependencies (no java nor python) and depends only on 2 R packages (Rcpp and data.table), which makes the package easy to install on any platform. The package is available for download at https://CRAN.R-project.org/package=udpipe and is developed at https://github.com/bnosac/udpipe. A small docusaurus website is made available at https://bnosac.github.io/udpipe/en
- We hope you enjoy using it and we would like to thank Milan Straka for all the efforts done on UDPipe as well as all persons involved in http://universaldependencies.org
Training on Text Mining with R
Are you interested in text mining? Feel free to register for the upcoming course on text mining.
Want to get started with it right away? Below is an example annotating Polish text in UTF-8 encoding, but you can pick any language of choice listed above. Enjoy.
model <- udpipe_download_model(language = "polish")
model <- udpipe_load_model(file = model$file_model)
x <- udpipe_annotate(model, x = "Budynek otrzymany od parafii wymaga remontu, a placówka nie otrzymała jeszcze żadnej dotacji.")
x <- as.data.frame(x)
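The resulting data frame contains one row per token, with columns such as token, lemma and upos (the universal part of speech tag), so the output is easy to filter with base R. A small sketch of what you might do next with the annotated object x from above:

```r
# Keep only the nouns of the annotated text and look at their lemmas
nouns <- subset(x, upos == "NOUN")
nouns$lemma
```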
BNOSAC is working on building an application on top of open data from questions and answers given at the parliament in Belgium. It will basically show what our civil servants in parliament are busy with. If you are interested in co-developing, feel free to get in touch for a quick chat. For those of you interested in an overview of open data available in Belgium, we've made a presentation showing what open data is available in Belgium for direct use (see below).
Interested in how open data can be used for your business? Get in touch.
As of October 2017, CRAN contains more than 11500 R packages. If you wanted to scroll through all of these at 5 seconds per package, you would need roughly 16 hours, or two full 8-hour working days.
Since R version 3.4, we can also get a dataset with all packages, their dependencies, the package title, the description and even the installation errors which the packages have. This makes the CRAN database with all packages an excellent dataset for text mining. If you want to get that dataset, just do as follows in R:
crandb <- tools::CRAN_package_db()
Based on that data, the following CRAN NLP searcher app was built as shown below. It's available for inspection at http://datatailor.be:9999/app/cran_search and is a tiny wrapper around the result of annotating the package title and package description using the udpipe R package: https://github.com/bnosac/udpipe
If you want to easily extract what is written in text without reading it, a common approach is to do parts of speech tagging, extract the nouns and/or the verbs and then plot all co-occurrences / correlations and frequencies of the lemmata. The udpipe package allows you to do exactly that. Annotating with parts of speech tags is pretty easy with the udpipe_annotate function from the udpipe R package (https://github.com/bnosac/udpipe). Note that annotating all of CRAN takes a while (around 30 minutes) and is probably something you want to run as a web service or integrated stored procedure.
ud_model <- udpipe_download_model(language = "english")
ud_model <- udpipe_load_model(ud_model$file_model)
crandb_annotated <- udpipe_annotate(ud_model,
                                    x = paste(crandb$Title, crandb$Description, sep = " \n "),
                                    doc_id = crandb$Package)
crandb_annotated <- as.data.frame(crandb_annotated)
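Once the annotated data frame is available, the word co-occurrences mentioned above can be computed with udpipe's cooccurrence function. A sketch, assuming crandb_annotated from the code above:

```r
# How often do two noun lemmas occur in the same package title/description?
nouns <- subset(crandb_annotated, upos == "NOUN")
cooc  <- cooccurrence(nouns, term = "lemma", group = "doc_id")
head(cooc)
```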
Once we have that data annotated, making a web application which allows you to visualise, structure and display the CRAN packages content is pretty easy with tools like flexdashboard. That's exactly what the web application available at http://datatailor.be:9999/app/cran_search does. The application allows you to:
- List all packages which are part of a CRAN Task View
- Search for CRAN packages based on what the author has written in the package title and description
- Visualise the nouns and verbs in the title and description of the packages found, by using
- Word-coocurrence graphs indicating how many times each lemma occurs in the same package as another lemma
- Word-correlation graphs showing the positive correlations between the top n most occurring lemmas in the packages
- Word clouds indicating the frequency of nouns/verbs or consecutive nouns/verbs (bigrams) in the package descriptions
- Build a topic model (Latent Dirichlet Allocation) to cluster packages and visualise them
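The topic model part can be sketched with udpipe's document/term helpers combined with the topicmodels package. A minimal sketch, assuming crandb_annotated from the annotation code above and that the topicmodels package is installed; the number of topics is an arbitrary choice:

```r
library(topicmodels)
# Build a document-term matrix on the noun lemmas per package
nouns <- subset(crandb_annotated, upos == "NOUN")
dtf   <- document_term_frequencies(nouns, document = "doc_id", term = "lemma")
dtm   <- document_term_matrix(dtf)
dtm   <- dtm_remove_lowfreq(dtm, minfreq = 5)
# Fit an LDA topic model with e.g. 20 topics and inspect the top terms
lda   <- LDA(dtm, k = 20, control = list(seed = 123))
terms(lda, 10)
```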
The web application (flexdashboard) was launched on a small shinyproxy server and is available here: http://datatailor.be:9999/app/cran_search. Can you find topics which are not yet covered by the CRAN Task Views? Can you find the content of the Rcpp universe or the sp package universe?
If you are interested in these techniques, you can always subscribe for our text mining with R course at the following dates: