You probably know that generating truly random data is not so easy to do with a computer. How to design a good Random Number Generator (or a pseudo-random one) is a math topic you can work on for years; it's also very important for real-life applications such as security/cryptography, for example when you need to generate strong passwords.
Usually (and this is true in cryptography in general), designing your own algorithm is a bad idea: unless you're a professional in this field and your algorithm has been reviewed by peers, you're almost guaranteed to have flaws in it that could be exploited.
But here, for fun (don't use it for critical applications!), let's try to generate 100 MB of true random data.
1) Record 20 minutes of audio at 96 kHz, 16-bit, mono with your computer's built-in microphone. Try to set the mic input level so that the average volume is neither 0 dB (saturation) nor -60 dB (too quiet); something around -10 dB looks good. What kind of audio should you record? Nothing special, the noise in your room is fine. You will get about 20*60*96000*2 ≈ 220 MB of data. Of these 220 MB, only about half will be really useful, because many values in the signal (an array of 16-bit integers) won't use the full 16-bit amplitude: many integers "encoding" the signal might have, for example, absolute value < 1024, and will thus provide only 10 bits.
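To check this estimate on your own recording, here is a small sketch (my addition to the recipe): a sample of absolute value v contributes floor(log2(v)) useful bits once its leading 1 is discarded, as done in the code of step 2.

import numpy as np
from scipy.io import wavfile

sr, x = wavfile.read('sound.wav')
v = np.abs(x.astype(np.int64))             # cast first to avoid int16 overflow on -32768
v = v[v > 0]                               # silent samples contribute no bits
useful = np.floor(np.log2(v)).astype(int)  # useful bits per sample (leading 1 discarded)
print(useful.sum() / (x.size * 16.0))      # fraction of the raw bits that is actually useful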
2) Now let's shuffle these millions of bits of data with some Python code:
from scipy.io import wavfile
import numpy as np
import functools

sr, x = wavfile.read('sound.wav')  # read a mono audio file, recorded with your computer's built-in microphone

#### GET A LIST OF ALL THE BITS
L = []  # list of bits
for i in range(len(x)):
    bits = format(abs(int(x[i])), "b")  # get the binary representation of the sample
    # don't use the "016b" format: it would create a bias, since small integers (those not
    # using the full 16-bit amplitude) would have many leading 0s!
    L += list(map(int, bits))[1:]  # discard the first bit, which is always 1!

print(L.count(1))
print(L.count(0))  # check that the bits are equidistributed between 0s and 1s

n = 2 ** int(np.log2(len(L)))
L = L[:n]  # crop the list of bits so that its length is a power of 2; actually the only
           # requirement is that len(L) is coprime with p (see below)

#### RECREATE A NEW BINARY FILE WITH ALL THESE BITS (SHUFFLED)
# The trick: don't use **consecutive bits**, as that would recreate something close to the
# input audio data. Let's take one bit every 96263 bits instead! Why 96263? Because it's a
# prime number, so we are guaranteed that
# 0 * 96263 mod n, 1 * 96263 mod n, ..., (n-1) * 96263 mod n will cover [0, 1, ..., n-1]. (**)
# This is true because 96263 is coprime with n. In math language: 96263 is a "generator"
# of (Z/nZ, +).
p = 96263  # the higher this prime number, the better the shuffling of the bits!
# If you have at least one minute of audio, you already have at least 45 million useful
# bits, so you could take p = 41716139 (just a random prime number I like, around 40M)

M = set()
with open('truerandom', 'wb') as f:
    for i in range(0, n, 8):
        M.update((k * p) % n for k in range(i, i + 8))      # optional, only here to prove that our math claim (**) is true
        c = [L[(k * p) % n] for k in range(i, i + 8)]       # take 8 bits, in shuffled order
        byte = functools.reduce(lambda a, b: 2 * a + b, c)  # pack these 8 bits into one byte
        f.write(bytes([byte]))

print(M == set(range(n)))  # True: this shows that the assertion (**) is true. Math rulez!
The truerandom file should now contain truly random data!
The only issue I can see happening right now is if the ADC (analog-to-digital converter) component of your sound chip is highly biased (please drop me a message if you have such a device).
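Should that happen, a classic remedy (not part of the recipe above; this helper is just an illustrative sketch) is the von Neumann extractor: read the bits in pairs, keep the first bit of each "01" or "10" pair, and drop the "00" and "11" pairs. If the bits are independent, the output is unbiased, at the cost of discarding most of the bits.

def von_neumann(bits):
    # keep the first bit of each "01"/"10" pair, drop "00"/"11" pairs
    return [b1 for b1, b2 in zip(bits[::2], bits[1::2]) if b1 != b2]

print(von_neumann([1, 0, 1, 1, 0, 1, 0, 0]))  # [1, 0]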
By the way, this bit-shuffling code is unoptimized: it took 2 minutes for 1 minute of audio. There's surely a better way to work with arrays of bits in Python; comments/improvements are welcome!
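As a first idea in that direction, here is a rough, untested sketch (my suggestion, not benchmarked) of how the shuffle-and-pack stage could be vectorized with NumPy; it reuses the bit list L, the cropped length n and the prime p from the code above.

import numpy as np

bits = np.array(L, dtype=np.uint8)
idx = (np.arange(n, dtype=np.int64) * p) % n  # the shuffled indices k*p mod n, as in (**)
with open('truerandom', 'wb') as f:
    f.write(np.packbits(bits[idx]).tobytes())  # packs 8 bits per byte, MSB first,
                                               # matching the reduce() in the loop above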
- How to test the randomness quality of this file? This is a complicated task, and here are some references for doing it. The following is very far from being rigorous, but it can be a first step (quote from the linked page): "I've seen winzip used as a tool to measure the randomness of a file of values before (obviously, the smaller it can compress the file, the less random it is)." If you try it on the file generated here, you get exactly the same size (or even a bit more) after zip-compressing the file! Same with rar and 7z (which usually yield far better compression ratios, especially on audio data): the compression ratio is 1:1.
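If you want to run this sanity check without leaving Python, here is a small equivalent using the standard zlib module (again, incompressibility is only a necessary condition for randomness, not a proof):

import zlib

data = open('truerandom', 'rb').read()
ratio = len(zlib.compress(data, 9)) / len(data)
print('compression ratio: %.4f' % ratio)  # expect a ratio around 1.0 (or slightly above)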
When making instrument sample sets (e.g. church organ sample sets used with Hauptwerk or GrandOrgue; see my project Jeux d'orgues), we need to set looping points (a, b) in WAV audio files, such that when playing the part [a, b] in a loop, we don't hear any click or pop when the sample reaches the end of the loop.
Example 1: bad loop with audible clicks
Example 2: seamless loop with no click, that's what we are looking for! The loop has a period of ~2.670 seconds; can you hear where the looping points are?
Finding looping points can be done manually but this is a very long and tedious task. A few programs exist to do this process automatically such as Extreme Sample Converter (it has an excellent auto-looping algorithm), LoopAuditioneer (open source), Zero-X Seamless Looper, SampleLooper, etc.
Here we'll look at a home-cooked algorithm that works well to detect looping points.
First of all, let's load the audio file (downloadable here) with Python:
from scipy.io import wavfile
import numpy as np
import itertools

sr, x = wavfile.read('060.wav')
x0 = x if x.ndim == 1 else x[:, 0]  # keep only one channel for simplicity, but this could
                                    # easily be generalized to 2 channels
x0 = np.asarray(x0, dtype=np.float32)
Let's say the audio file's sustain part (this is precisely where we're looking for a loop!) begins at t=2 sec and finishes at t=9 sec. We will now subdivide the time interval [2 sec, 9 sec] into a 250-millisecond grid: 2, 2.25, 2.5, 2.75, 3, 3.25, ..., 8.75, 9.
From this sequence, we now create "loop candidates" (a, b) of length at least 1 second, example: (2.5, 7.5), (3.25, 5.75), (6.0, 8.75), etc.
Then, for each loop candidate, we'll improve the loop (this is the core of the algorithm, discussed in the next paragraph) and compute a distance measuring how good the improved loop is.
We finally keep the loop that has the minimal distance (among all loop candidates). Finished!
A = [int((2 + 0.25 * k) * sr) for k in range(29)]  # the grid 2, 2.25, 2.5, ..., 8.75, 9

dist = np.inf
for a, b in itertools.product(A, A):  # Cartesian product: pairs (a, b) of points on the grid
    if b - a < 1 * sr:  # keep only loop candidates of length at least 1 second
        continue
    a, B, d = improveloop(x0, a, b, sr=sr)
    print('Loop (%.3fs, %.3fs) improved to (%.3fs, %.3fs), distance: %i'
          % (a * 1.0 / sr, b * 1.0 / sr, a * 1.0 / sr, B * 1.0 / sr, d))
    if d < dist:
        aa, BB, dist = a, B, d

print("The final loop is (%.3fs, %.3fs), i.e. (%i, %i)." % (aa * 1.0 / sr, BB * 1.0 / sr, aa, BB))
Finished? Not yet! We need to explain what we mean by improving a loop, as that's the crucial part of the algorithm. More precisely, we'll now explain how to transform a loop (3.25, 5.75) with points taken on the grid (this random loop probably "clicks" like in Example 1 above!) into a "good loop" (3.25, 5.831). Let's zoom in on the junction point to understand what's going on:
How do we measure whether a loop is good or not? Ideally, if the loop (a, b) is perfect/seamless, x0[a : a+10ms] should be very close to x0[b : b+10ms]. Measuring how close two arrays x and y are can be done by computing sum((x[n] - y[n])^2): if this sum is small, x and y are close. Finding the offset k such that np.sum(np.abs(x0[a:a+W1] - x0[k+b:k+b+W1])**2) is minimal can be done by noting that

(x[n] - y[n+k])**2 = x[n]**2 - 2*x[n]*y[n+k] + y[n+k]**2,

so that, once summed over n, the middle term is exactly a cross-correlation, which can be computed with numpy.correlate. We can now define this function:
def improveloop(x0, a, b, sr=44100, w1=0.010, w2=0.100):
    """
    Input:  (a, b) is a loop
    Output: (a, B, distance) where (a, B) is a better loop and distance measures its
            quality (the smaller the distance, the better the loop)

    This function moves the loop's endpoint b to B (up to 100 ms further), such that
    (a, B) is a "better" loop, i.e. sum((x0[a:a+10ms] - x0[B:B+10ms])**2) is minimal.
    """
    W1 = int(w1 * sr)
    W2 = int(w2 * sr)
    x = x0[a:a+W1]
    y = x0[b:b+W2]
    delta = np.sum(x**2) - 2 * np.correlate(y, x) + np.correlate(y**2, np.ones_like(x))
    K = np.argmin(delta)
    B = K + b
    distance = delta[K]
    return a, B, distance
That's it, in less than 50 lines of Python code!
This audio file (looped 4 times here, but we could loop it forever) has been obtained with the algorithm described here. Not too bad, n'est-ce pas?
Example of output:
Loop (2.000s, 3.000s) improved to (2.000s, 3.009s), distance: 1003724800
Loop (2.000s, 3.250s) improved to (2.000s, 3.340s), distance: 839278592
Loop (2.000s, 3.500s) improved to (2.000s, 3.559s), distance: 1281863680
[...]
Loop (2.000s, 8.500s) improved to (2.000s, 8.544s), distance: 1092337664
Loop (2.000s, 8.750s) improved to (2.000s, 8.789s), distance: 964747264
Loop (2.000s, 9.000s) improved to (2.000s, 9.004s), distance: 2488913920
[...]
Loop (7.750s, 9.000s) improved to (7.750s, 9.004s), distance: 1167093760
Loop (8.000s, 9.000s) improved to (8.000s, 9.001s), distance: 1710333952
The final loop is (6.750s, 8.322s), i.e. (297675, 366989).
Note: Wouldn't it be possible to save these loop markers inside the WAV file's metadata instead of just printing them on screen? Sure, but as Python's standard library doesn't support editing WAV markers, you'll have to use the techniques described below to do this.
Python comes with the built-in wave module, and for most use cases it's enough for reading and writing .wav audio files.
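For example, here is a minimal, self-contained use of the stdlib module, reading a WAV file and writing an identical copy (the filenames are just placeholders):

import wave

with wave.open('input.wav', 'rb') as f:
    params = f.getparams()                 # channels, sample width, framerate, etc.
    frames = f.readframes(f.getnframes())  # raw audio bytes

with wave.open('copy.wav', 'wb') as g:
    g.setparams(params)
    g.writeframes(frames)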
But in some cases, you need to be able to work with 24-bit or 32-bit audio files, or to read cue markers, loop markers or other metadata (required for example when designing sampler software). As I needed this for various projects such as SamplerBox, here are some contributions I made: an enhanced version of the wave module that adds some little useful things. (See Revision #1 for the diff against the original stdlib code.)
from wave import open

f = open('Take1.wav')
print(f.getmarkers())
If you're familiar with contributing to the main Python repository (I'm not), feel free to include these additions there.
The module scipy.io.wavfile is very useful too. So here is an enhanced version:
Among other things, it adds 24-bit and 32-bit IEEE support, cue marker & cue marker labels support, pitch metadata, etc.
from wavfile import read, write

(sr, samples, br, cue, cuelabels, cuelist, loops, f0) = read('Take1.wav',
    readmarkers=True, readmarkerlabels=True, readmarkerslist=True,
    readpitch=True, readloops=True)
print(read('Take1.wav', readmarkers=True, readmarkerlabels=True,
      readmarkerslist=True, readpitch=True, readloops=True))

write('Take2.wav', sr, samples, bitrate=br, markers=cue, loops=loops, pitch=130.82)
print(read('Take2.wav', readmarkers=True, readmarkerlabels=True,
      readmarkerslist=True, readpitch=True, readloops=True))

write('Take3.wav', sr, samples, bitrate=br, markers=cuelist, loops=loops, pitch=130.82)
Here is what loop markers look like in the good old (non-open-source but soooo useful) SoundForge:
Lastly, this is how to convert a WAV to MP3 with pydub, for future reference. As usual, do pip install pydub and make sure ffmpeg is in the system path. Then:
from pydub import AudioSegment

song = AudioSegment.from_wav("test.wav")
song.export("test.mp3", format="mp3", bitrate="256k")
will convert a WAV file to MP3.
Since I started using StackOverflow, I've always loved its text editor (the one you use when writing a question/answer), because it supports Markdown syntax (a very elegant markup language for adding bold, italics, titles, links, itemization, etc.) and even MathJax (which is more or less LaTeX syntax in the browser). I've always wanted to use such an editor for my own documents.
After some research, I found a few existing tools, but:
- half of them don't support LaTeX / MathJax (for math formulas)
- some of them do, but have a 1-second delay between keypress and display, which I find annoying; see e.g. StackEdit
- some of them have annoying flickering each time you write new text, once math is present on the page
- most of them are not minimalist / distraction-free enough for me
Let's go and actually build one! Here is the result, Writing:
Here's the source: https://github.com/josephernest/writing
I'm sure you'll like it!
If you really like that, you can donate here: 1NkhiexP8NgKadN7yPrKg26Y4DN4hTsXbz
Have you ever spent more than 1 second wondering:
"How do I get on my computer this photo I just made with my phone?"
"How do I get this PDF from my computer to my phone?"
Then you probably thought "Let's use Dropbox! ... oh no I'm not logged in on my phone, but what is my password again? Well, let's send the file to myself via email! Maybe I should just use a USB cable... but where is my USB cable again?"
Yopp is a solution to this problem that you can easily install on your web server.
Thoughts about user experience & user interface design
This tool - Yopp - requires a total of 7 actions to get the job done:
Open browser on phone [1 tap], Open Yopp page [1 tap if it's in the bookmarks], UPLOAD [1 tap], Choose file [1 action], Open browser on computer [1 double click], Open Yopp page [1 click if in bookmarks], DOWNLOAD [1 click].
I'll be happy to switch to another tool if one requiring fewer actions exists.
I noticed that my likelihood/probability of using any tool (all other things being equal) is more or less proportional to P = 1 / a^2 (*), where a is the number of required actions/user inputs. If the number of required actions is doubled, the likelihood of using the tool is divided by 4.
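As a toy illustration of (*), with action counts taken from the examples in this post (the numbers are illustrative, not measured):

for tool, a in [('card swipe only', 1), ('Yopp', 7), ('Velo+ terminal', 12)]:
    print('%-16s a = %2d   P ~ 1/a^2 = %.4f' % (tool, a, 1.0 / a ** 2))
# the 12-action terminal comes out ~144x less attractive than a single swipe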
Thus, even if it might sound obvious, one key element of a good user interface is to minimize the number of user actions needed to get a task done. If not, the user might unconsciously remember that the interface is unnecessarily complicated to use, forget about the product, and look for another solution. (OK, this is probably what will happen for you with Yopp if you don't already have a web server!)
As an example, I'm sure I'd use my city's bicycle-sharing system Velo+ much more if I could take a bike by just swiping my card on the bike station's card reader (this is technically possible). Instead, we have to: tap on a screen (1), choose "Subscribed user" (2), swipe the card (3), choose "Rent a bike" (4) (this one is particularly useless), accept the conditions already accepted many times before (5), etc.; in the end it requires at least 12 actions! Any user who has done this at least once will process this data (the required amount of input) and will probably choose not to use the system for short-distance trips.
It would be interesting to get more statistical data about the empirical result (*), this will be discussed in a future post.
This topic has been present in my thoughts for a long time, probably years:
“How to be able to think/write about lots of unrelated various topics, and still have a way to look at the big picture of what you’re doing?”
Here are my contributions about this:
1. bigpictu.re, a ready-to-use infinite notepad (infinite zooming and panning)
2. A standalone version of 1. (so you can take notes offline), also available here: bigpicture-editor
3. AReallyBigPage, an infinite collaborative notepad. It became real chaos once hundreds of people joined. Probably the internet's deepest page ;)
Such an interface is called a Zooming User Interface (interesting reading: The Humane Interface by Jef Raskin, one of the creators of the Apple Macintosh), and strangely, ZUIs have rarely been used in modern interfaces.
As of 2017, nearly every software interface uses 2D, or even 1D, navigation: a web page only offers two scrolling directions, north and south. Even today's apps famous for their "new kind of interface" still use one-axis navigation: "Swipe left or right".
Is there a future made of new interfaces?
After testing many open-source website analytics tools and not finding exactly what I was looking for, I started a minimalist project (coded in PHP) that only does this:
- display the number of visits per day
- display the referrers (i.e. the pages that link to your website)
If you're looking for a tool lighter than Piwik, Open Web Analytics or Google Analytics, then TinyAnalytics might be what you're looking for.
You discovered Google Analytics a few years ago (a webmaster tool to see how many visits your websites get) and used it efficiently. But, you know, Google-centralized internet, etc., and then you thought "Let's go self-hosted and open-source!" And then you tried Piwik and Open Web Analytics.
I did the same. After a few months, here are my conclusions.
Open Web Analytics has a great look, close to Google Analytics, but every week, I had to deal with new issues:
- first I discovered that a gigantic table was growing in the MySQL database:
+----------+----------------+-----------+
| database | table          | size (MB) |
+----------+----------------+-----------+
| owa      | owa_request    |      4.44 |
| owa      | owa_click      |      5.30 |
| owa      | owa_domstream  |    238.28 |
+----------+----------------+-----------+
Nearly 250 MB of analytics data in 2 weeks (for only a few small websites): this means more than 6 GB of analytics data per year in the MySQL database, or even 60 GB per year if you have 100k+ pageviews! That's far too much for my server. This was (nearly) solved by disabling the Domstream feature. (OK, Domstream is a great feature, but I would have liked to know in advance that it would eat so much space in the database.)
- today I've seen that a new table in the OWA database was very big (747 MB in a few weeks!):
+----------+----------------+-----------+
| database | table          | size (MB) |
+----------+----------------+-----------+
| owaa     | owa_queue_item |    747.92 |
+----------+----------------+-----------+
- some other issues: login impossible from Chrome in certain situations; unique-visitor count wrong when using the PHP tracker (sometimes, each new visit/refresh of the page is counted as a new visitor); time-range menu not displayed at all (display stuck on the 1-week range) in some cases; etc.
I'm not saying OWA is bad: Open Web Analytics is a good open-source solution, but only if you have time to spend on configuration issues on a regular basis, which I sadly don't.
I tried Piwik very quickly. It really is a great project but:
- it doesn't offer an out-of-the-box view of what I was looking for, i.e. clear charts for every website à la Google Analytics (I can't exactly describe the problem, but the user interface isn't handy for me)
- maybe there's an easy fix for this, but the interface is very slow
Analytics: an unsolved problem.
I'm still looking for a lightweight self-hosted solution. Until then, I'll probably have to use Google Analytics again.
PS: No offence meant: most of my work is open-source too, and I know that it takes time to build a stable, mature tool. This post just reflects the situation at the end of 2016.