This post shows how to simulate animal vocalizations using the new warbleR function sim_songs. The function allows users to create songs with several sub-units and harmonics, which are returned as a wave object in the R environment. This can have several applications, from simulating song evolution to testing the efficacy of methods that measure acoustic structure across different signal types.
The function uses a Brownian motion model of stochastic diffusion to simulate changes in frequency (i.e. modulation) across continuous traces of sound (song sub-units or elements). Several song parameters can be customized, allowing users to simulate a wide range of signal structures. More parameters and diffusion models will be added in future versions.
First, install and/or load the warbleR development version (if an older warbleR version is installed, it has to be removed first):
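A minimal install sketch, assuming the development version lives in the maRce10/warbleR GitHub repository and that devtools is available:

```r
# remove any older installed version first
# remove.packages("warbleR")

# install the development version from GitHub
# install.packages("devtools")
devtools::install_github("maRce10/warbleR")

library(warbleR)
```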
The basic song “parameters” that can be customized using the sim_songs function are:
The number of sub-units
The number of harmonics
The carrier frequencies
The degree of frequency modulation
The duration of sub-units and silences in between
The following code simulates a “song” with 5 sub-units, with a starting frequency of 5 kHz and a single harmonic (fundamental):
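A sketch of that call. The argument names here (n for the number of sub-units, freqs for the carrier frequency in kHz, harms for the number of harmonics) are assumptions based on this post; check `?sim_songs` for the names in your installed version:

```r
# 5 sub-units, carrier frequency at 5 kHz, a single harmonic (the fundamental);
# the seed makes the simulation reproducible
sng <- sim_songs(n = 5, freqs = 5, harms = 1, seed = 27)

# inspect the result as a spectrogram (spectro() comes from seewave)
seewave::spectro(sng)
```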
* A seed is used so that the same song is generated every time. If `seed = NULL` (the default), a different song will be produced each time.
Longer “repertoires” can be produced by simply increasing n:
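For instance, assuming n sets the number of sub-units as in the example above:

```r
# a longer "repertoire": 10 sub-units instead of 5
sng10 <- sim_songs(n = 10, freqs = 5, harms = 1, seed = 27)
```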
The amount of modulation can be controlled with the sig2 argument (just as in the Brownian motion model of evolution). Low values produce little variation in the frequency slope:
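A sketch with a deliberately small sig2 (the exact value is illustrative):

```r
# low sig2: nearly flat, weakly modulated elements
sng_flat <- sim_songs(n = 5, sig2 = 0.001, seed = 27)
```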
While higher sig2 values will generate “faster” frequency changes:
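And the same call with a larger sig2 (again, the value is just for illustration):

```r
# high sig2: steeper, "faster" frequency changes
sng_mod <- sim_songs(n = 5, sig2 = 2, seed = 27)
```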
Harmonics can also be added to the songs by modifying the harms argument:
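For example, assuming harms takes the total number of harmonics:

```r
# three harmonics per element
sng_h3 <- sim_songs(n = 5, harms = 3, seed = 27)
```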
The steps argument defines the length of the time series generated by the underlying Brownian motion function. This time series is simply the frequency value at each time window of the song elements (after some spline smoothing). Lower steps values produce less modulated songs (other things being equal):
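A sketch with a short underlying time series:

```r
# few steps: smoother, less modulated frequency contours
sng_s3 <- sim_songs(n = 5, steps = 3, seed = 27)
```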
Higher steps values will increase modulation:
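And with a longer one:

```r
# many steps: more points in the Brownian motion time series, more modulation
sng_s50 <- sim_songs(n = 5, steps = 50, seed = 27)
```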
The duration of the sub-units and gaps between elements can also be specified for each of the items:
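A sketch, assuming durs and gaps are the argument names and that each accepts one value per item (the expected vector lengths may differ across versions):

```r
# one duration (s) per sub-unit and one silence (s) per gap
sng_var <- sim_songs(n = 4,
                     durs = c(0.1, 0.2, 0.3, 0.15),
                     gaps = c(0.05, 0.1, 0.15, 0.08),
                     seed = 27)
```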
Users can also adjust the relative amplitude of the harmonics. The following code puts the highest energy on the second harmonic (i.e. the dominant frequency) using the amps argument:
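A sketch using the amps argument named in this post (the relative amplitude values are illustrative, and newer versions may use a different name for this argument):

```r
# three harmonics, with the second one carrying the most energy
sng_dom2 <- sim_songs(n = 5, harms = 3, amps = c(0.4, 1, 0.2), seed = 27)
```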
The simulated song can be played with the play function from tuneR or by saving the wave file and opening it in a regular audio player. Here is an example of a 20 element simulated song:
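Something along these lines (tuneR's play needs an audio player it can call on your system):

```r
sng20 <- sim_songs(n = 20, seed = 27)

# play directly from R
tuneR::play(sng20)

# or write to disk and open with any regular audio player
tuneR::writeWave(sng20, "simulated_song_20_elements.wav")
```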
And this is how it sounds:
In the above example the “BB” diffusion model was used to simulate the sounds (argument diff_fun). This method forces the sub-units to start and end at the same frequency. Note also that combining random variables to set durations and gaps can help create more realistic songs.
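A sketch combining both ideas, using the diff_fun argument name given in this post (some versions may spell it differently) and uniform random variables for durations and gaps:

```r
# Brownian-bridge ("BB") diffusion plus randomized durations and gaps
sng_bb <- sim_songs(n = 10,
                    diff_fun = "BB",
                    durs = runif(10, min = 0.05, max = 0.2),
                    gaps = runif(10, min = 0.05, max = 0.15))
```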
Finally, the background noise level (argument bgn), sampling rate (samp.rate), and amplitude fade-in (fin) and fade-out (fout & shape) can also be adjusted.
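For instance (the values, and the assumption that samp.rate is given in kHz, are illustrative):

```r
sng_full <- sim_songs(n = 5,
                      bgn = 0.3,         # background noise level
                      samp.rate = 44.1,  # sampling rate (kHz, assumed units)
                      fin = 0.1,         # amplitude fade-in
                      fout = 0.2,        # amplitude fade-out
                      shape = "linear")  # fade shape
```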
More options will be added in future versions, and suggestions are welcome. I am particularly interested in diffusion models (or any other algorithms) that can generate different signal types, ideally ones even more similar to those found in nature.