pagefile.sys forensics: Beware of Yara false positives due to Microsoft Defender artifacts

May 24th, 2022

Since I do a lot of forensics, I discovered Andrea Fortuna's site with a lot of useful information. However, in one case he is wrong (at least nowadays; I assume it depends on the Windows 10 version, I used Win10 EDU 21H2 for my research): pagefile.sys forensics:
https://andreafortuna.org/2019/04/17/how-to-extract-forensic-artifacts-from-pagefile-sys/
Yara and a scan for URL artifacts with strings lead to false positives caused by Microsoft Defender memory artifacts, even on a freshly installed Windows.
On a fresh Windows 10 install that has been connected to the internet for just 5 minutes, yara finds the following matches:

APT1_LIGHTBOLT pagefile.sys
Tofu_Backdoor pagefile.sys
APT9002Code pagefile.sys
APT9002Strings pagefile.sys
APT9002 pagefile.sys
Cobalt_functions pagefile.sys
NK_SSL_PROXY pagefile.sys
Industroyer_Malware_1 pagefile.sys
Industroyer_Malware_2 pagefile.sys
malware_red_leaves_memory pagefile.sys
GEN_PowerShell pagefile.sys
SharedStrings pagefile.sys
Trojan_W32_Gh0stMiancha_1_0_0 pagefile.sys
spyeye pagefile.sys
with_sqlite pagefile.sys
MALW_trickbot_bankBot pagefile.sys
XMRIG_Miner pagefile.sys
Ursnif pagefile.sys
easterjackpos pagefile.sys
Bolonyokte pagefile.sys
Cerberus pagefile.sys
DarkComet_1 pagefile.sys
xtreme_rat pagefile.sys
xtremrat pagefile.sys

which are definitely false positives!
I have used the malware_index.yar from the Yara-Rules repository:

wget https://github.com/Yara-Rules/rules/archive/refs/heads/master.zip
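For reference, unpacking the rule archive and pointing yara at the index looks roughly like this (a sketch; the archive unpacks to a rules-master directory, and the repository layout may have changed in the meantime):

unzip master.zip                                   # unpacks to rules-master/
yara rules-master/malware_index.yar pagefile.sys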

Even if the freshly installed Windows 10 is completely isolated from the network, yara still finds some artifacts:

APT1_LIGHTBOLT pagefile.sys
GEN_PowerShell pagefile.sys
with_sqlite pagefile.sys
Bolonyokte pagefile.sys

The list of URLs extracted with the strings command

$ strings pagefile.sys | egrep "^https?://" | sort | uniq > url_findings.txt

will itself be detected as malware on Windows ;-). My assumption was that the origin of these malware artifacts were the malware signatures of Windows Defender. To verify this, I've done some experiments with Windows 10 virtual machines under Linux.
Since Windows Defender is an integral part of Windows 10 and 11, it is not an easy task to remove it completely from a fresh installation. None of the guides I found still work with current Windows 10 versions since 21H2. Finally, I found a PowerShell script on Jeremy's site bidouillesecurity.com:
https://bidouillesecurity.com/disable-windows-defender-in-powershell/
However, even with this script Windows Update tries to download malware signatures, which will finally end up as artifacts in the pagefile.sys. Only if I fully block the internet access of the freshly created installation do no malware artifacts appear in pagefile.sys. Preventing Windows updates is not an easy task nowadays; just setting the Ethernet connection to metered does not work anymore in current versions.
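One simple way to keep such a VM offline is, for example, to disable the virtual network adapter in the .vmx file before booting the guest (a sketch of my own; the exact key name depends on the VMware version, so verify it against your .vmx file):

# sketch: switch off the VM's virtual NIC before the first boot
sed -i 's/^ethernet0\.present *=.*/ethernet0.present = "FALSE"/' /path/myWin10.vmx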
For my experiments, I have created virtual machines in VMware Player, converted the VMDK images to raw format with qemu-img and extracted the pagefile.sys from the image for forensic investigation.

qemu-img convert -p -O raw /path/myWin10.vmdk  vm.raw

Set up the loop device:

losetup -P /dev/loop100 vm.raw

The -P option scans for partitions; the loop device loop100 is chosen explicitly (-f would select the next free device, but loop100 should always be free).
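Before mounting, a quick look at the partition table shows which partition of the loop device holds the Windows system (p3 in my case):

lsblk /dev/loop100        # lists loop100p1, loop100p2, loop100p3 with their sizes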
Mount the image:

mount  /dev/loop100p3 /mnt/image

Copy it:

cp /mnt/image/pagefile.sys .

Yara scan:

yara /home/bischoff/yara-rules/rules-master/malware_index.yar pagefile.sys > yara_neu.log

URL extraction:

strings pagefile.sys | egrep "^https?://" | sort | uniq > alle_urls.txt

To wipe the original pagefile.sys completely I have used

shred -uvz /mnt/image/pagefile.sys

Unmount the image and detach the loop device:

umount /mnt/image
losetup -d /dev/loop100

Convert the raw image back to a VMware image:

qemu-img convert -p -O vmdk vm.raw  /path/myWin10.vmdk

Even if you can't get a memory dump of an infected machine, a hiberfil.sys or a pagefile.sys provides the forensic engineer with indirect information about the memory content. But be warned of Defender artifacts, which could lead you to false-positive detections.

Generating MP3-Audio out of the Simple English Wikipedia with Pediaphon

January 30th, 2018

An ESL (English as a Second Language) teacher contacted me with the great idea to integrate the Simple English Wikipedia into the Pediaphon service. I had been in contact with English language teachers before, but I had simply overlooked the great potential of the Simple English Wikipedia for teaching.

Now it is implemented and ready for testing:
https://www.pediaphon.org/~bischoff/radiopedia/index_en_simple.html

I hope this will be useful for English teachers and students all over the world who want to generate teaching material for free.

This is not okay Google – no spy device at your wrist – how to mute the microphone of Android wear smartwatches

June 22nd, 2015

A nice feature of Android Wear devices (smartwatches) is the integrated speech recognition. If you don't want to use it all the time, you are out of luck, because Google has left out a configuration setting to disable the speech recognition. It is only disabled if you set your watch to flight mode. But the smartwatch becomes completely useless in this mode; even the time can be displayed incorrectly. Every time you lift your hand, the watch waits for the activation sequence: it is always listening! If you don't want to wear a bugging device on your wrist all the time, or don't want your speech samples to be processed by cloud services (like me, yes, I'm paranoid ;-), you need an app to mute the microphone. Permanently running speech recognition can also be dangerous and expensive: for instance, somebody nearby, or on TV, says "OK Google dial 0900-XXXX" and the phone dials an expensive number. Since I was unable to find such an app, I started to code my own.

My “mute wear mic” app protects your privacy

The final app adds the missing feature to Android Wear. It mutes the microphone of Wear devices and disables the Wear speech recognition without flight mode, so you can use and enjoy your watch without running a permanent spy application. This protects your privacy and the privacy of your friends. If you would say "I have nothing to hide", think about Edward Snowden's reply to this argument:

“Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.” Edward Snowden

The app is available in the Play Store for free:
https://play.google.com/store/apps/details?id=de.udue.zim.bischoff.not_okay

The app needs the permission to change audio settings.
Privacy statement for the App.
Enjoy it.

The app should be able to protect you against the curiosity of Google. I am unsure whether this is enough to protect your privacy against the NSA, because the NSA has planned to hijack the Google Play app store: https://firstlook.org/theintercept/2015/05/21/nsa-five-eyes-google-samsung-app-stores-spyware/

Speak, friend, and enter – Speech synthesis with eSpeak and SVOX-pico for the Raspberry Pi

March 25th, 2014

Speech synthesis is a nice feature for embedded systems and tiny computers, and not only for home automation. If a display is lacking, sound (speech) output may be a cheap alternative to provide the user with information. For example, a mobile device which gets an IP address via DHCP in changing environments can announce its IP via speech output even if no connection to the Internet is established. I had already realized this feature in 2006 to remotely control an ActiveMedia Pioneer 3AT mobile robot via a web interface; back then I used MBROLA TTS for this task.

 

 

My mobile ActiveMedia Pioneer 3AT robot at the University of Hagen in 2006, realized with MBROLA TTS

 

To realize similar functionality on a Raspberry Pi you can simply install eSpeak, an open source text-to-speech (TTS) software which is available in Raspbian out of the box. It supports over 20 languages and has a small memory footprint. To install it, type:

sudo apt-get install espeak

To test the speech synthesis, simply type:

sudo amixer cset numid=3 1

espeak -ven "hello world this is raspberry pi talking"

The amixer command is required to select the headphone jack as output; the default is the HDMI output.

Better speech quality with Android SVOX pico TTS engine

To get much better speech quality out of your Raspberry Pi I would recommend the open source Android SVOX pico TTS engine. It supports high quality speech generation in five languages (en [uk|us], de, fr, it, es). I am using the same TTS engine for my free Pediaphon Wikipedia TTS service. SVOX pico is the best open source TTS engine available for Linux.

Unfortunately there is no prebuilt binary available in the Raspbian distribution. But it is possible to build it from source yourself, or to simply download my prebuilt ARM deb package for Raspbian (MD5 hash: b530eb9ff97b9cf079f622efe46ce513) and install it.

sudo apt-get install libpopt-dev
sudo dpkg --install pico2wave.deb

To test it, try:
sudo amixer cset numid=3 1
pico2wave --lang=en-US --wave=/tmp/test.wav "hello world this is raspberry pi talking"; play /tmp/test.wav;rm /tmp/test.wav
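The play command used above is part of the SoX package; if it is not installed yet, install it first:

sudo apt-get install sox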

Instructions to build pico2wave from source:
git clone -b upstream+patches git://git.debian.org/collab-maint/svox.git svox-pico
apt-get install automake libtool libpopt-dev
automake
./autogen.sh
./configure
make all
make install

To speak the IP address every time the Pi boots up, add the following commands to /etc/rc.local:

/usr/bin/amixer cset numid=3 1
/usr/bin/espeak -vde "my I P address is $_IP"
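On a stock Raspbian image, /etc/rc.local already defines the _IP variable in a small block; a sketch of the complete block (assuming the default rc.local template and keeping the espeak call from above) looks like this:

# sketch of the relevant part of /etc/rc.local
_IP=$(hostname -I) || true
if [ "$_IP" ]; then
  /usr/bin/amixer cset numid=3 1
  /usr/bin/espeak -vde "my I P address is $_IP"
fi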

Enjoy!

Raspberry Pi versus Cray XT 6m supercomputer – Raspberry Pi calculates MD5 hash collisions

August 31st, 2012

The Raspberry Pi is a small ARM11-based board (ARM1176JZF-S with the ARMv6 instruction set) with a 100 Mbit Ethernet port, HDMI, analog video, GPIO pins, SPI, I²C, UART and two USB interfaces. The processor is identical to the first-generation Apple iPhone CPU.

What you get – no SD card  included

 

The Raspberry Pi costs just about 25-30 $ and was designed with educational use in mind. You can order the device in Great Britain for 34 € with shipping to Germany and a T-shirt included. Because of the low power consumption (3.5 watts, fanless, no heat sink required), its tiny size (credit card form factor) and the low price, the Raspberry Pi is an ideal device for developing energy-efficient solutions like NAS boxes, routers and media centers. The Raspberry Pi uses an SD card as its mass storage device, which can be deployed with a proper Linux distribution like Raspbian "wheezy", a modified Debian. The distribution consists of a modern Linux with a resource-optimized desktop and a slim Webkit-based browser (Midori). Although the Raspberry Pi was not designed for such a use case, it is fast enough to surf the Internet!

T-shirt included, the Raspberry Pi

 

The custom Linux distribution Raspbian "wheezy" is strongly recommended because of its hardware floating point support for the ARM11 processor. The Debian ARM "armel" distribution (for ARMv4t, ARMv5 and ARMv6 devices) lacks support for the hardware floating point capabilities of the ARM11 (ARMv6). The Debian "armhf" and the Ubuntu ARM distributions support only ARMv7 instruction set devices (ARM Cortex-A8 at minimum).

 

To be sure to make use of hardware floating point, set the following compiler options:

-mcpu=arm1176jzf-s -mfpu=vfp -mfloat-abi=hard

Otherwise the  floating point operations will be emulated in software, which is approximately 10 times slower.
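For example, a compiler invocation with these flags could look like this (the source file name is just a placeholder):

gcc -O2 -mcpu=arm1176jzf-s -mfpu=vfp -mfloat-abi=hard -o myprog myprog.c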

 

ARM processor guide and Android Tablet buying hints

ARM processors, or ARM cores integrated into a chip (system on a chip, SoC), drive almost all current Android smartphones and tablets. The Apple A5 SoC in the iPhone and iPad is also based on ARM cores. Apart from the processor for the user environment (e.g. Android or iOS), there are several other ARM cores integrated into a modern smartphone. Smaller, energy-efficient ARM cores are used for instance in the radio device of the phone, which executes all the GSM/UMTS/2G/3G communication tasks. Most of today's Bluetooth and GPS chipsets contain an ARM core too. Your smartphone is likely equipped with four or more integrated ARM cores in different chipsets. The naming convention of the instruction sets of ARM processors should not be confused with the naming of the ARM architecture (see http://en.wikipedia.org/wiki/ARM_architecture). Also, the custom designations of 'system on a chip' (SoC) products from different manufacturers do not correspond to the standard ARM architecture naming conventions. This site gives some hints as to which ARM core is integrated in the SoCs of common vendors, and it is very useful to compare the performance of common SoCs integrated in Android tablets. To give you a hint: don't buy a device with an ARM core below the Cortex-A8. Tablets with an ARM Cortex-A8 CPU are available nowadays for ~100 €, but expect iPad-3-like performance only from a device with at least an ARM Cortex-A9 (ARMv7) with several cores.

Desktop of Raspbian “wheezy” distribution, Midori web browser

 

MD5 hash collision

To test the computing performance of the Raspberry Pi ARM11 processor (ARMv6) I did not choose a standard benchmark but the  MD5 Collision Demo from Peter Selinger.

It implements an algorithm to attack the MD5 hash function and compute a collision. Such collisions are useful to create a second (hacked) binary (or document) with the same hash value as the original binary. The algorithm always starts with a random value to compute a hash collision. If several instances of the algorithm run on several CPUs or cores with different random start values, the chance of finding a collision increases. The algorithm itself is not parallelized, but it profits from different random start values.
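Since the search only benefits from independent random start values, one can simply start one instance per core and wait for the first one to finish. A rough sketch (the binary name md5coll is a placeholder for whatever Selinger's demo builds on your system, and each instance is assumed to pick its own random start value):

# start one independent collision search per core
for i in $(seq 1 "$(nproc)"); do
  ./md5coll > "collision_$i.txt" &
done
wait -n   # bash >= 4.3: returns as soon as the first instance has finished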

 

PC versus …

The first run was done on a single-core Atom netbook (2 h 46 m). An 8-core Intel machine (two Xeon quad-core processors) needed only 16 minutes and 6 seconds to find a collision. Only one core had found a collision after that time; the last core did not find one until after 3 hours (8 times 100% CPU load, see picture).

The top command (press '1' to see all cores)

 

… CRAY versus ….

I had tested the Cray XT 6m supercomputer of the University of Duisburg-Essen with the same task before, in June 2010. I was limited to 300 of the overall 4128 cores at that time. One of the cores found a hash collision after 56 seconds. On the Cray it is easy to start such a job automatically on all available cores at once.

Cray supercomputer at the University of Duisburg-Essen

 

… Raspberry PI

The small Raspberry Pi found a hash collision after 30 hours and 15 minutes. The algorithm is not a real benchmark; with bad luck it is possible to get a disadvantageous random start value. Two other runs ended after 19 hours and 10 minutes and after 29 hours and 28 minutes respectively. But how does the Raspberry Pi compare to the Cray in energy efficiency?

 

Raspberry Pi – cheaper and not so noisy but slower than a Cray supercomputer. Surprisingly similar energy drain in relation to the computation power.

 

The two Cray units at the University of Duisburg-Essen consume 40 kW each. To dissipate the heat, the air conditioning needs the same amount of power, so the overall power consumption of the installation is about 160 kW. Relative to the 300 cores used in the experiment, the power consumption is about 11.6 kW. In 56 seconds the machine therefore uses about 0.18 kWh of electrical energy. The Raspberry Pi's power consumption is about 0.0035 kW; after 30.25 hours the energy usage is about 0.106 kWh. Even if the energy for the air conditioning is not considered, the energy consumption in relation to the computation power is surprisingly similar for both devices!
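As a quick sanity check of these figures (a small sketch with bc, using the numbers from above):

echo "scale=4; 160 * 300 / 4128" | bc    # share of 300 cores: ~11.6 kW
echo "scale=4; 11.6 * 56 / 3600" | bc    # Cray, 56 seconds: ~0.18 kWh
echo "scale=4; 0.0035 * 30.25" | bc      # Raspberry Pi, 30.25 hours: ~0.106 kWh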

The new iWebkit Pediaphon interface for Android, iPhone, iPad and iPod

June 24th, 2011

The HTML5 audio tag has been supported by the iPhone, iPod and iPad since iOS 3, and Android supports HTML5 audio since version 2.3 Gingerbread too. I was curious whether the shared browser base in both worlds could be used to create a unified touch-based interface for a web application like the Pediaphon. Most of today's mobile web browsers (Android, Nokia S60, Palm Pre, Openmoko, ...) rely on the free and open source Webkit rendering engine. Initially developed as the KHTML rendering engine for KDE, later adopted by Apple and nowadays supported even by Google (Chrome and the Android web browser), Webkit is a lean but powerful rendering engine, and not only for mobile devices. If you are reading this blog on a Mac, you are actually using the Webkit rendering engine. A really lean Webkit-based browser for Linux/Windows that I can recommend is Midori.

I am a great fan of the concept of preferring web-based, standardized applications over native apps for proprietary devices for e- and m-learning purposes. Development resources are usually scarce, especially at universities. It is always a good idea to focus on standard, reusable HTML components rather than wasting time and effort on proprietary app development.

My first idea was to use the multi-platform Sencha Touch framework for the touch interface of the Pediaphon (a text-to-speech service for the Wikipedia). A colleague pointed me to iWebkit, which originally targets the iPhone but also performs nicely on other Webkit-based browsers, like Android's stock web browser. It provides an iOS-like touch interface on all Webkit-based browsers; the look and feel of the web page is exactly like a native iPhone app. For the programmer or integrator, iWebkit is a lean and simple solution; even rudimentary HTML knowledge is sufficient to use it.

Here is the result: a touch interface for the Pediaphon, which converts Wikipedia articles into speech and realizes audio output just with the HTML5 audio tag; no plugin is required for iOS >= 3 and Android 2.3 Gingerbread. For Android 2.2, which supports Flash on some devices, there is still a Flash option. The Pediaphon mobile touch interface offers 5 languages so far, and of course English is included ;-) .

Screenshots:

Try it here: http://i-e.pediaphon.de

Enjoy!

HTML5 geolocation with Openstreetmap and OpenLayers for Android, iPhone, iPAD and iPod

March 30th, 2011

Since I am very interested in the HTML5 geolocation feature and Openstreetmap (with the help of the great OpenLayers project), I coded a minimal solution for Android and iPhone in July 2010 (described on my German-language blog). Because of the large number of Google hits, I have now embedded the example in an iframe in my blog:

Try this link to use the map directly on your Android/iOS phone.

This example realizes a simple map with a marker at the user's position in a Webkit-based smartphone web browser. It can easily be extended to a simple moving map. It's like Google Maps mobile without Google (not really: it runs on Android too, and the embedded location provider in your Firefox PC browser is also Google; believe me, just open about:config and filter for geo.wifi) ;-) And it is platform independent! It works perfectly on Android, iPhone, iPod touch and iPad too.

Here is the minimal source code for the example:

 
<html>
  <head>
    <title>HTML5 geolocation with Openstreetmap and OpenLayers</title>
    <style type="text/css">
      html, body, #basicMap {
          width: 240px;
          height: 320px;
          margin: 10px;
      }
    </style>


    <!-- OpenLayers 2 library, expected next to this file (or load it from the OpenLayers project site) -->
    <script src="OpenLayers.js"></script>
    <script>
      function init() {
        // create the map and add the Openstreetmap (Mapnik) tile layer
        map = new OpenLayers.Map("basicMap");
        var mapnik = new OpenLayers.Layer.OSM();
        map.addLayer(mapnik);

        // ask the browser for the current position (HTML5 geolocation API)
        navigator.geolocation.getCurrentPosition(function(position) {
          document.getElementById('anzeige').innerHTML =
            "Latitude: " + position.coords.latitude +
            "   Longitude: " + position.coords.longitude + "<p>";

          // transform from WGS 1984 to the Spherical Mercator projection used by the map
          var lonLat = new OpenLayers.LonLat(position.coords.longitude,
                                             position.coords.latitude)
            .transform(new OpenLayers.Projection("EPSG:4326"),
                       map.getProjectionObject());

          // place a marker at the user's position and center the map there
          markers.addMarker(new OpenLayers.Marker(lonLat));
          map.setCenter(lonLat, 14); // zoom level
        });

        // initial map center, used until the geolocation callback fires
        map.setCenter(
          new OpenLayers.LonLat(3, 3)
            .transform(new OpenLayers.Projection("EPSG:4326"),     // from WGS 1984
                       new OpenLayers.Projection("EPSG:900913")),  // to Spherical Mercator
          15 // zoom level
        );

        // marker layer used by the geolocation callback above
        var markers = new OpenLayers.Layer.Markers("Markers");
        map.addLayer(markers);
      }
    </script>

  </head>

  <body onload="init();">
<center>
HTML5 geolocation: 
<br>
    <div id="basicMap"></div>
<br>HTML5 geolocation<br>
<br>with Openstreetmap and OpenLayers<br>
For Android Froyo, iPhone, iPad, iPod
<br>
Your position estimated by browser geolocation API:<p>

<div id="anzeige">(will be displayed here)<p></div>
<a href="http://www.dr-bischoff.de">Andreas Bischoff</a>

<br>(view source to see how it works ;-)
</center>
  </body>
</html>

Update: There is a new version with a moving map available; view the source code on my German blog.

Long awaited feature: HTML5 audio support in Android 2.3 gingerbread

February 21st, 2011

A long awaited feature: the Android 2.3 Gingerbread Webkit browser supports HTML5 audio natively, like the Apple iPad and iPhone!

Tested with the Pediaphon service on an emulated Gingerbread device. On selected Android 2.1 and 2.2 devices, like the Archos 7.0 internet tablet, there is Flash-based audio support for the Pediaphon site too, but finally Android supports HTML5 audio natively.

Android Gingerbread supporting HTML5 audio in the emulator

The Pediaphon at the CeBIT 2010 fair, Hannover, Germany 2.-6.3.2010, hall 9 / booth D06

February 17th, 2010

I am proud to announce that I will present the Pediaphon at the
CeBIT fair (Germany, Hannover, 2.-6.3.2010, hall 9/booth D06) this year. I will present at the 'Innovationsland Nordrhein-Westfalen' booth, a joint presentation of all universities located in North Rhine-Westphalia. I will share my booth with the 'mobile learning' project of the University of Hagen (FernUniversität in Hagen). Because of lack of time, I will only be at the booth between 4th and 6th March 2010.

At the 'future talk' event I will present my work in a talk (Saturday, 6th March 2010, at 10:00, hall 9, booth A30, in German language, I am sorry ;-)).

Six more languages – the Pediaphon now supports Polish, Czech, Dutch, Italian, Swedish and Portuguese

April 27th, 2009

I am very proud to announce that six new languages are now supported by the Pediaphon, the computer generated speech interface to the Wikipedia:

  • Polish (Polski),
  • Czech
  • Italian (Italiano)
  • Swedish (Svenska)
  • Portuguese (Portugues)
  • Dutch (Nederlands)

Try it here.

Watch out for more languages! Coming soon!

Have fun!