Archive for the ‘Pediaphon’ Category

Generating MP3 audio from the Simple English Wikipedia with the Pediaphon

Tuesday, January 30th, 2018

An ESL (English as a Second Language) teacher contacted me with the great idea of integrating the Simple English Wikipedia into the Pediaphon service. I had been in contact with English language teachers before, but I had simply overlooked the great potential of the Simple English Wikipedia for teaching.

It is now implemented and ready for testing:
https://www.pediaphon.org/~bischoff/radiopedia/index_en_simple.html

I hope this will be useful for English teachers and students all over the world who want to generate teaching material for free.

Speak, friend, and enter – Speech synthesis with eSpeak and SVOX-pico for the Raspberry Pi

Tuesday, March 25th, 2014

Speech synthesis is a nice feature for embedded systems and tiny computers, and not only for home automation. If a display is lacking, sound (speech) output can be a cheap way to provide the user with information. For example, a mobile device that gets an IP address via DHCP in changing environments can announce its IP via speech output even if no connection to the Internet is established. I implemented this feature back in 2006 to remotely control an ActivMedia Pioneer 3-AT mobile robot via a web interface; at that time I used MBROLA TTS for the task.

 

 

My mobile ActivMedia Pioneer 3-AT robot at the University of Hagen in 2006, realized with MBROLA TTS

 

To realize similar functionality on a Raspberry Pi you can simply install eSpeak, an open source text-to-speech (TTS) engine which is available in Raspbian out of the box. It supports over 20 languages and has a small memory footprint. To install it, type:

sudo apt-get install espeak

To test the speech synthesis, simply type:

sudo amixer cset numid=3 1

espeak -ven "hello world this is raspberry pi talking"

The amixer command is required to select the analogue headphone jack as output; the default is the HDMI out.
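eSpeak can also list the voices and languages it ships with; you can then select one with the -v option and adjust the speaking rate with -s. A minimal sketch (the en-us voice name is the one shipped with classic eSpeak; voice names may differ slightly between versions):

# list all available eSpeak voices/languages
espeak --voices
# pick a specific voice (here US English) and slow the speech down to 140 words per minute
espeak -ven-us -s 140 "hello world"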

Better speech quality with Android SVOX pico TTS engine

To get much better speech quality out of your Raspberry Pi, I recommend the open source Android SVOX pico TTS engine. It supports high-quality speech generation in five languages (en [uk|us], de, fr, it, es). I use the same TTS engine for my free Pediaphon Wikipedia TTS service. SVOX pico is the best open source TTS engine available for Linux.

Unfortunately there is no prebuilt binary available in the Raspbian distribution, but it is possible to build it from source yourself or simply download my prebuilt ARM deb package for Raspbian (MD5 hash: b530eb9ff97b9cf079f622efe46ce513) and install it.

sudo apt-get install libpopt-dev
sudo dpkg --install pico2wave.deb

To test it, try (the play command used below is part of the sox package):
sudo amixer cset numid=3 1
pico2wave --lang=en-US --wave=/tmp/test.wav "hello world this is raspberry pi talking"; play /tmp/test.wav;rm /tmp/test.wav
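If you use this often, a small wrapper script can hide the temporary file handling. A minimal sketch (the script name say.sh is just a suggestion; it assumes pico2wave and sox are installed as above):

#!/bin/sh
# say.sh - speak the given text with SVOX pico via the analogue audio jack
TEXT="$*"
WAV="/tmp/pico2wave_$$.wav"            # pico2wave requires a file name ending in .wav
pico2wave --lang=en-US --wave="$WAV" "$TEXT"
play "$WAV"                            # 'play' is part of the sox package
rm -f "$WAV"

Called as ./say.sh hello world, it simply speaks its arguments.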

Instructions to build pico2wave from source:
git clone -b upstream+patches git://git.debian.org/collab-maint/svox.git svox-pico
cd svox-pico    # change into the checkout (the autotools files may sit in a pico/ subdirectory, depending on the repository layout)
sudo apt-get install automake libtool libpopt-dev
automake
./autogen.sh
./configure
make all
sudo make install

To speak the IP address every time the Pi boots up, add the following commands to /etc/rc.local:

/usr/bin/amixer cset numid=3 1
/usr/bin/espeak -ven "my I P address is $_IP"
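Note that $_IP is not defined by these two lines themselves; the stock Raspbian /etc/rc.local derives it from hostname -I near the top of the file. A minimal excerpt, assuming the default Raspbian rc.local layout:

_IP=$(hostname -I) || true                        # set by the stock Raspbian rc.local
if [ "$_IP" ]; then
  /usr/bin/amixer cset numid=3 1                  # route audio to the analogue jack
  /usr/bin/espeak -ven "my I P address is $_IP"   # speak the address at boot
fi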

Enjoy!

Raspberry Pi versus Cray XT 6m supercomputer – Raspberry Pi calculates MD5 hash collisions

Friday, August 31st, 2012

The Raspberry Pi is a small ARM11-based board (ARM1176JZF-S with the ARMv6 instruction set) with a 100 Mbit Ethernet port, HDMI, analog video, GPIO pins, SPI, I²C, UART and two USB interfaces. The processor is identical to the CPU of the first-generation Apple iPhone.

What you get – no SD card included

 

The Raspberry Pi costs only about $25-30 and was designed with educational use in mind. You can order the device in the UK for 34 € including shipping to Germany and a T-shirt. Because of its low power consumption (3.5 watts, fanless, no heat sink required), its tiny size (credit card form factor) and its low price, the Raspberry Pi is an ideal device for developing energy efficient solutions like NAS boxes, routers and media centers. The Raspberry Pi uses an SD card as mass storage device, which can be deployed with a suitable Linux distribution like Raspbian “wheezy”, a modified Debian. The distribution consists of a modern Linux with a resource-optimized desktop and a slim Webkit-based browser (Midori). Although the Raspberry Pi was not designed for such a use case, it is fast enough to surf the Internet!

T-shirt included, the Raspberry Pi

 

The custom Linux distribution Raspbian “wheezy” is strongly recommended because of its hardware floating point support for the ARM11 processor. The Debian ARM “armel” distribution (for ARMv4t, ARMv5 and ARMv6 devices) lacks support for the hardware floating point capabilities of the ARM11 (ARMv6). The Debian “armhf” and the Ubuntu ARM distributions support only ARMv7 instruction set devices (minimum ARM Cortex-A8).

 

To make sure you use hardware floating point, set the following compiler options:

-mcpu=arm1176jzf-s -mfpu=vfp -mfloat-abi=hard

Otherwise the floating point operations will be emulated in software, which is approximately 10 times slower.
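For illustration, a compile run on the Pi itself could look like this (the file name fptest.c is just a placeholder; the flags are the ones listed above):

# compile a small floating point test with hardware FPU support enabled
gcc -O2 -mcpu=arm1176jzf-s -mfpu=vfp -mfloat-abi=hard -o fptest fptest.c -lm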

 

ARM processor guide and Android Tablet buying hints

ARM processors, or ARM cores integrated into a system on a chip (SoC), drive nearly all current Android smartphones and tablets. The Apple A5 SoC in the iPhone and iPad is also based on ARM cores. Apart from the processor for the user environment (e.g. Android or iOS), there are several other ARM cores integrated in a modern smartphone. Smaller, energy efficient ARM cores are used for instance in the radio part of the phone, which handles all the GSM/UMTS/2G/3G communication tasks. Most of today’s Bluetooth and GPS chipsets contain an ARM core too. Your smartphone is likely equipped with four or more ARM cores spread over different chipsets. The naming convention for the instruction sets of ARM processors should not be confused with the naming of the ARM architecture (see http://en.wikipedia.org/wiki/ARM_architecture). The custom ‘system on a chip’ (SoC) designations of different manufacturers do not correspond to the standard ARM architecture naming conventions either. This site gives some hints about which ARM core is integrated in the SoCs of common vendors, and it is very useful for comparing the performance of the SoCs used in Android tablets. To give you a hint: don’t buy a device with an ARM core below Cortex-A8. Tablets with an ARM Cortex-A8 CPU are available nowadays for ~100 €. But expect iPad 3-like performance only from a device with at least an ARM Cortex-A9 (ARMv7) with several cores.

Desktop of the Raspbian “wheezy” distribution, Midori web browser included

 

MD5 hash collision

To test the computing performance of the Raspberry Pi’s ARM11 processor (ARMv6) I did not choose a standard benchmark but the MD5 Collision Demo by Peter Selinger.

It implements an algorithm that attacks MD5 to compute a collision. Such collisions are useful for creating a second (hacked) binary (or document) with a hash value identical to that of the original binary. The algorithm starts from a random value to compute a hash collision. When several instances of the algorithm run on multiple CPUs or cores with different random start values, the chance of finding a collision increases. The algorithm itself is not parallelized, but it profits from different random start values (a simple way to exploit this is sketched below).
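A minimal sketch of how several independent instances can be started, one per core; the binary name md5coll is only an assumption about how Selinger’s demo is compiled on your system, so adjust it accordingly:

#!/bin/sh
# start one collision search per CPU core; each process picks its own random start value
CORES=$(nproc)
for i in $(seq 1 "$CORES"); do
  ./md5coll > "collision_$i.log" 2>&1 &    # hypothetical binary name from the MD5 Collision Demo
done
wait                                       # returns when all searches have finished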

 

PC versus …

The first run was done on a single core Atom netbook (2 h 46 m). An 8-core Intel machine (two quad-core Xeon processors) needed only 16 minutes and 6 seconds to find a collision. Only one core had found a collision after that time; the last core did not find one until after about 3 hours (8 times 100% CPU load, see picture).

The top command (press ‘1’ to see all cores)

 

… CRAY versus ….

I had tested the Cray XT6m supercomputer of the University of Duisburg-Essen with the same task before, in June 2010. I was limited to 300 of the overall 4128 cores at that time. One of the cores found a hash collision after 56 seconds. On the Cray it is easy to start such a job automatically on all available cores at once.

Cray supercomputer at the University of Duisburg-Essen

 

… Raspberry Pi

The small Raspberry Pi found a hash collision after 30 hours and 15 minutes. The algorithm is not a real benchmark; with bad luck it is possible to get a disadvantageous random start value. Two other runs ended after 19 hours and 10 minutes and after 29 hours and 28 minutes, respectively. But how does the Raspberry Pi compare to the Cray in energy efficiency?

 

Raspberry Pi – cheaper and not as noisy as a Cray supercomputer, but slower. Surprisingly similar energy consumption in relation to the computing power.

 

The two Cray cabinets at the University of Duisburg-Essen consume 40 kW each. To dissipate the heat, the air conditioning needs about the same amount of power again, so the overall power consumption of the installation is about 160 kW. Scaled to the 300 cores used in the experiment, the power consumption is about 11.6 kW, and in 56 seconds the installation uses about 0.18 kWh of electrical energy. The Raspberry Pi consumes about 0.0035 kW, so after 30.25 hours its energy usage is about 0.106 kWh. If I disregard the energy for the air conditioning, the energy consumption in relation to the computing power is surprisingly similar for both devices!
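For reference, the figures above can be reproduced with a quick calculation (values taken directly from the text):

echo "scale=3; 160 * 300 / 4128" | bc    # ~11.6 kW share of the Cray for 300 of 4128 cores
echo "scale=3; 11.6 * 56 / 3600" | bc    # ~0.180 kWh used by the Cray in 56 seconds
echo "scale=3; 0.0035 * 30.25" | bc      # ~0.106 kWh used by the Raspberry Pi in 30.25 hours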

The new iWebkit Pediaphon interface for Android, iPhone, iPod and iPad

Friday, June 24th, 2011

The HTML5 audio tag has been supported by the iPhone, iPod and iPad since iOS 3, and Android supports HTML5 audio since version 2.3 Gingerbread too. I was curious whether the shared browser base in both worlds could be used to create a single touch-based interface for a web-based application like the Pediaphon. Most of today’s mobile web browsers (Android, Nokia S60, Palm Pre, Openmoko, …) rely on the free and open source Webkit rendering engine. Initially developed as the rendering engine for KDE’s KHTML, later adopted by Apple and nowadays supported even by Google (Chrome and the Android web browser), Webkit is a lean but powerful rendering engine, and not only for mobile devices. If you are reading this blog on a Mac you are actually using the Webkit rendering engine. A really lean Webkit-based browser for Linux/Windows that I can recommend is Midori.

For e- and m-learning purposes I am a great fan of preferring standardized web-based applications to native apps for proprietary devices. Development resources are usually scarce, especially at universities, and it is always a good idea to focus on standard, reusable HTML components rather than wasting time and effort on proprietary app development.

My first idea was to use the multi-platform Sencha Touch framework for the touch interface of the Pediaphon (a text-to-speech service for the Wikipedia). A colleague pointed me to iWebkit, which originally targets the iPhone but also performs nicely on other Webkit-based browsers, like Android’s stock web browser. It provides an iOS-like touch interface to all Webkit-based browsers; the look and feel of the web page is exactly like a native iPhone app. For the programmer or integrator, iWebkit is a lean and simple solution, and even rudimentary HTML knowledge is sufficient to use it.

Here is the result: a touch interface for the Pediaphon which converts Wikipedia articles into speech and realizes audio output just with the HTML5 audio tag; no plugin is required for iOS >= 3 and Android 2.3 Gingerbread. For Android 2.2, which supports Flash on some devices, there is still a Flash option. The Pediaphon mobile touch interface offers 5 languages so far, and of course English is included ;-).

Screenshots:

Try it here: http://i-e.pediaphon.de

Enjoy!

Long awaited feature: HTML5 audio support in Android 2.3 gingerbread

Monday, February 21st, 2011

A long awaited feature: the Android 2.3 Gingerbread Webkit browser supports HTML5 audio natively, like the Apple iPad and iPhone!

Tested with the Pediaphon service on an emulated Gingerbread device. On selected Android 2.1 and 2.2 devices like the Archos internet tablet 7.0 there is Flash-based audio support for the Pediaphon site too, but now Android finally supports HTML5 audio natively.

Android Gingerbread supporting HTML5 audio in the emulator

The Pediaphon at the CeBIT 2010 fair, Hannover, Germany 2.-6.3.2010, hall 9 / booth D06

Wednesday, February 17th, 2010

I am proud to announce that I will present the Pediaphon at the CeBIT fair (Germany, Hannover, 2-6 March 2010, hall 9/booth D06) this year. I will present at the ‘Innovationsland Nordrhein-Westfalen’ booth, a joint presentation of all universities located in North Rhine-Westphalia, and I will share the booth with the ‘mobile learning’ project of the University of Hagen (FernUniversität in Hagen). Due to lack of time I will only be at the booth from 4th to 6th March 2010.

At the ‘future talk’ event I will present my work in a talk (Saturday, 6th March 2010, at 10:00, hall 9, booth A30; German language, I am sorry ;-) ).

Six more languages – the Pediaphon now supports Polish, Czech, Dutch, Italian, Swedish and Portuguese

Monday, April 27th, 2009

I am very proud to announce that six new languages are now supported by the Pediaphon, the computer generated speech interface to the Wikipedia:

  • Polish (Polski),
  • Czech
  • Italian (Italiano)
  • Swedish (Svenska)
  • Portuguese (Português)
  • Dutch (Nederlands)

Try it here.

Watch out for more languages! Coming soon!

Have fun!

Ballad of Wiki by Teru – the Pediaphon as a singer in a mix at ccmixter.org

Monday, March 3rd, 2008

I am very proud to announce that the Pediaphon is now part of a piece of art! Teru, a musician living in Vancouver, Canada, used the voice of the English language Pediaphon in a song. The voice is originally a British English male voice (known as ‘Roger’s voice’) from the MBROLA project. The very impressive song is available at ccmixter.org, a cool site which provides musicians with samples released under a Creative Commons licence. The title of the song is ‘Ballad of Wiki’.

Thank you very much Teru!

Don’t miss Teru’s other mixes at ccmixter.org!

The Pediaphon in Spanish language

Tuesday, February 12th, 2008

I have just released a Pediaphon version for the Spanish language Wikipedia. The Spanish language version is still beta, but support for speaking large numbers has been added. Phone access to the Spanish version is still on the to-do list.

Try the Spanish language version here!

Pediaphon as a location based service

Tuesday, February 12th, 2008

English Web version released!

Let your computer read out the Wikipedia article that best fits your location! Use Wikipedia as a talking guide!

To extend the Pediaphon service to a location-based service, some information about the user’s position is required. The proposed approach assumes that the user is equipped with a GPS receiver or a cell phone, or at least knows some address data for his location. In cases where the user only knows some address data, this information can easily be converted to GPS coordinates with the help of the Google Maps API geocoding web service. Even if only a P.O. box or city name is known, the web service returns coordinates that are imprecise relative to the user’s real position, but nearby.

As a mobile user you can estimate your position with Google Mobile’s My Location feature without a GPS device! Unfortunately, there is currently no open API to use it directly for external location-based services like the Pediaphon.

With the help of a web service at geonames.org it is possible to get geocoded Wikipedia articles near a given position; this approach is the so-called reverse geocoding (a sketch of such a query follows below). The Pediaphon location-based service converts the selected article into a spoken MP3 audio file on the fly for each position request. In a second step, the web-based Pediaphon service was enriched with a Google Maps mashup: for each request a Google map is generated, with markers for the user’s position and the positions of the geocoded nearby Wikipedia articles, with the help of the Google Maps API. On click, each marker provides a direct link to a generated audio representation of the geocoded Wikipedia article. For a user equipped with a GPS receiver or cell phone (smartphone), this approach can easily be extended to automatic playback of the nearest Wikipedia article whenever a better fitting article appears as the user moves. This provides the functionality of an automatic talking travel guide in an unknown environment (e.g. a tourist guide).
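A minimal sketch of such a reverse geocoding query against geonames.org (the coordinates and the demo username are placeholders; a real deployment needs its own geonames account, and parameter names may change over time):

# ask geonames.org for geocoded Wikipedia articles near a given position
curl "http://api.geonames.org/findNearbyWikipediaJSON?lat=51.37&lng=7.48&radius=5&maxRows=10&username=demo"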

Try the location based Pediaphon service here.



Google map mashup (satellite view) with Pediaphon markers.