nxfury

Musings of a *Nix Nerd

On Monday, June 22, 2020, Apple's renowned (infamous?) World-Wide Developer Conference took place. As usual, new devices were announced- along with a shocker that has the potential to kill off software ecosystems and shut down development efforts.

Okay, Okay. What The Heck Happened?

According to MacWorld and video of the actual conference, Apple plans to swap out its CPUs for a new, custom in-house design... the same processor line found in iPhones and iPads! These processors are based on ARM CPU technology, but in keeping with Apple's walled-garden stance, the design has been modified.

ARM and a Leg

In order to avoid this turning into a complaining rant, let's balance this out by first mentioning the benefits ARM provides:

  • ARM is based on a RISC architecture, allowing for lower power usage and improved performance (provided the software is properly written)
  • ARM is less expensive than Intel or AMD CPUs, theoretically reducing the cost of production.
  • ARM is found in tons of Internet-Of-Things and mobile devices, especially smartphones.

The Catch

Although these benefits are amazing and make the switch worth considering for laptops and mobile devices, Apple isn't following the standard design. For all users know, the machine language may differ from that of a stock ARM CPU. If so, it's impossible to write C/C++ or compile ANY third-party code without an Apple-supplied compiler. On top of this, hackers and developers have struggled for YEARS to get Linux working on the iPhone and iPad, and have always hit hangups with the CPU and the hardware in the device that locks them out.

Since the new Macs will be using this same line of chips, running anything other than software condoned by Apple will be impossible. To further compound the issue, Apple has agreements with the U.S. Federal Trade Commission to ban the import of components for their devices. On top of this, Apple intentionally opted to solder the storage drive to the motherboard and removed the data recovery pins- now the only way to protect your info on a Mac is to buy their services or keep a backup drive.

The Problem

This activity poses several ethical and financial dilemmas for a potential buyer seeking a new laptop or computer.

“We're Sorry It's Broken. Feel Free to Buy A New Mac!”

How many times have you heard this at the Apple store or a computer shop? Did you know that the majority of the time a computer breaks, the repair normally costs no more than $50-100 USD? By making devices impossible to repair, Apple gives itself actual justification for making this claim. But then, if it's impossible to fix or even recover your data, why buy it?

No Schematics For You!

Most people can agree that we all disagree on many things. However, most can agree that major companies are not worthy of our trust in light of recent scandals. When a major laptop manufacturer like Apple switches to a custom in-house CPU, it becomes impossible to audit its security without attempting to hack it and playing the role of the “bad guy”.

Lockdown and Lock out

Apple has always run a “Walled-Garden” ecosystem, but it allowed third-party apps to run. With this new CPU, all third-party software is entirely dependent on whether or not Apple chooses to release compilers for its architecture. Even so, will they apply licenses to the compilers? Will they comply with current operating system standards? Nobody knows, and there's potential for the death of third-party apps on the new platform. At the very least, all third-party software would have to be recompiled (or rebuilt) to be compatible with the new architecture. Some maintainers may never bother, and third-party support will dwindle.

Solutions?

As we all know, companies are driven by profit and go back on their decisions if those decisions hurt sales. So if you don't support Apple's actions, DON'T BUY THE PRODUCT!

On top of this, hacking and research communities should pick up the new Macs as they arrive, figure out how everything works, and invent ways to enable compatibility with other software and (maybe?) hardware on these new devices.

Lastly, if this bothers you, spread the word and explain it to others so they understand the importance of having the ability to fix your own belongings.


Sine waves... GASP!!! This is the voodoo magic of the world of wireless technology. Not really, but it is mathematical, really fascinating, and I can't stop geeking out about it.

So why is this so important? As it turns out, this basic physics is what allows WiFi, AM/FM radio, GPS, CB radio, Bluetooth, headphones and more to interact with the world around us. Without further ado, onward forth!

What's Your Sine?

So a sine wave is literally just a measurement of frequency and amplitude, plotted on a graph. Frequency is measured in hertz (Hz), where 1 Hz is one repetition per second. Amplitude is measured in decibels- think of volume control versus pitch when listening to music. The legal maximum output of a US FM radio station is 80 dBm, the measured power of the Sun is approximately 306 dBm, and the average conversation is about 50-65 dB. When plotted on a line graph, one might get a signal that looks like this:

SineWave1: Assuming this signal repeats 1 million times a second, we could say it operates at 1 MHz (1 megahertz).

Amplitude Modulation, or AM, is when the amplitude (the dBm) is altered to produce a data stream. The benefit is extremely long range, but it is very susceptible to interference from natural sources like lightning.

Frequency Modulation, or FM, is when the frequency (the Hz) is altered to produce a data stream. The range isn't as good as AM, but it generally takes an intentional attack (such as jamming) to interfere with the signal.

Prepare The Phase Ray Generator!!!

Only half kidding- the phase of a signal describes where in its cycle the signal starts as it's transmitting. For example, you can have a 1 Hz signal whose peaks and valleys land in different places when graphed compared to someone else's 1 Hz signal.

SineWave2: Sine waves are divided into 360 degrees of phase, like a circle. Above, the wave has been split into 90-degree chunks.

Enter phase shifting. This is commonly found in wireless networking (it underlies the modulation used by WiFi), in encrypted communications networks, and even in mobile phones. To phase shift a signal, all one needs to do is take a chunk from the beginning of the signal and slap it on the end, using the degree measurement of the sine wave. A 180 degree phase shift causes a complete inversion of the wave, shown here:

SineWave3
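For the curious, here's a minimal sketch in C of what a phase shift looks like numerically. The frequency, sample count and 180-degree shift below are arbitrary values picked purely for illustration:

#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void)
{
  const double freq = 1.0;          // a 1 Hz signal, purely illustrative
  const double shift_deg = 180.0;   // phase shift in degrees
  const double shift_rad = shift_deg * M_PI / 180.0;

  // Print one cycle of the original wave next to the shifted wave.
  for (int i = 0; i < 8; i++) {
    double t = i / 8.0;             // 8 samples across one cycle
    double original = sin(2.0 * M_PI * freq * t);
    double shifted  = sin(2.0 * M_PI * freq * t + shift_rad);
    printf("t=%.3f  original=% .3f  shifted=% .3f\n", t, original, shifted);
  }
  return 0;
}

Compile it with gcc phase.c -lm and the shifted column comes out as the negated original- exactly the inversion pictured above.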

Phase lock is used in encrypted communications by having both devices phase shift until they reach the same phase. The devices can then exchange cryptographic keys and communicate, hop to a different frequency and continue the exchange there, or end the transmission if something's wrong.

Why You Gotta Be So Noisy, Bro?

Noise is when a transmission experiences interference, making the sine wave (when graphed) look all jagged and strange. Noise cancellation is the application of various filters to weed out the garbage. “Active Noise Cancellation Technology” dates back to the 1980s-1990s, when adaptive filters (a predecessor to machine learning/AI) found use in detecting data that didn't belong on the sine wave, “smoothing” out the transmission and reducing noise.

Since then, this technology has improved and can be found in headphones, music players and more.

Packetization

Packetizing something is basically when one treats the peaks of a sine wave as a binary “1” and the valleys as a binary “0”. Depending on the amount of time spent in the “1” or “0” state, we can transmit several of the same value in a row (allowing us to transmit a stream of data). Due to this, we can send bits and bytes to other devices. But how on earth are we supposed to understand what's being sent?

Enter network protocols. The most popular one in existence is the TCP/IP standard, responsible for how the Internet behaves. Within it, there is documentation on SPECIFICALLY how long each “packet” of traffic should be and what should be contained inside. (Spoiler: it's basically destination and source info, the data, and a checksum to validate the info)
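To make that spoiler a bit more concrete, here's a toy sketch in C of what a “packet” could look like. This is NOT the real TCP/IP header layout- just a hypothetical struct with source, destination, data and a simple additive checksum to show the idea:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

// A toy packet: not a real TCP/IP header, just the general shape.
struct toy_packet {
  uint32_t src;          // source address
  uint32_t dst;          // destination address
  uint16_t length;       // number of payload bytes
  uint8_t  payload[32];  // the data being carried
  uint16_t checksum;     // value used to validate the packet
};

// Add up the header fields and every payload byte- a crude stand-in
// for a real checksum.
static uint16_t toy_checksum(const struct toy_packet *p)
{
  uint32_t sum = p->src + p->dst + p->length;
  for (uint16_t i = 0; i < p->length; i++)
    sum += p->payload[i];
  return (uint16_t)(sum & 0xFFFF);
}

int main(void)
{
  struct toy_packet pkt = { .src = 1, .dst = 2 };
  const char *msg = "hello";

  pkt.length = (uint16_t)strlen(msg);
  memcpy(pkt.payload, msg, pkt.length);
  pkt.checksum = toy_checksum(&pkt);

  // The receiver recomputes the checksum and compares it to the one sent.
  printf("checksum ok? %s\n", toy_checksum(&pkt) == pkt.checksum ? "yes" : "no");
  return 0;
}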

I haven't been entirely honest with you guys... This is actually what's taught in calculus as well as physics courses at colleges and universities. Funny enough, it's easy to understand and amazing to see how it all works! Side note: the RTL-SDR is a great way to experiment with this, and I will most likely be writing a post on using it in the future.

Until next time!


If you had a laptop in the 2000s or earlier, chances are that you'll remember the ThinkPads of old- practically indestructible devices with awesome keyboards that were easy to customize. They were (and still are) widely seen as the go-to laptop for productivity due to their utilitarian design choices. Fast forward in time: IBM sold the designs and schematics to Lenovo, and the ThinkPad of old is no more... or is it?

The ThinkPad T430 was the first ThinkPad to ship with an island-style chiclet keyboard, the first in a long line of devices to ignore the tried-and-true design of the previous 20 years. However, it's fully compatible with the T420 keyboard, which provides the old keyboard design.

Things You'll Need

Below is a list of required items for this mod:

  • A T430 ThinkPad (of course!)
  • A working T420 keyboard for installation
  • A large white towel
  • A set of precision screwdrivers
  • Wire cutters
  • A file
  • Needle-nose pliers

Setup

Take a white bath towel and spread it over your desk. This provides a nice, white surface that makes screws easy to see and helps prevent damage to components from unexpected drops. Once that is done, put the ThinkPad (ThonkPad?) on the towel, open the lid of the device and flip it over, screen facing down. Lastly, pull out the tools and have them at the ready.

Violating The Keyboard

Wait, What? Yes, you heard correctly- it's time to take the wire cutters to the T420 keyboard. But hold on! Let's take care to tweak the right things.

So there are 4 tabs on the bottom of the stock T430 keyboard, and the T420 has 5 of them. Remove the one in the center, under the mouse buttons, and file it smooth:

Deleting Center Tab: Take this slowly! Don't destroy your mouse buttons!

With this completed, the existing tabs at the bottom of the keyboard now need to be modified to accommodate the ThinkPad chassis:

KeyboardMod2

KeyboardMod3: You will need your needle-nose pliers and the file to obtain this shape.

With this all done, put the keyboard to the side and let's crack open the laptop.

Death To Ye Olde Keyboard!!!

Thinking ahead, we will want to temporarily remove the laptop's palmrest along with the keyboard. To do this, COMPLETELY open your laptop and flip it over so it lies flat, then remove the battery. Then, use your precision screwdriver set to remove the middle panel on the back and the USB port cover in the bottom-right corner. Remember to save these screws, as they will be necessary for reassembly.

Now, we need to remove the screws that keep the chassis held together:

KeyboardMod4: The screws circled in red underneath the middle cover can be thrown away or repurposed elsewhere, as reinstalling them will kill your classic keyboard. Otherwise, save the screws.

Now, flip the laptop over so the screen faces up and open it fully to 180 degrees. Use a flathead screwdriver to pry the bottom of the installed keyboard forward. Once that is accomplished, there should be enough space to pry up on it, allowing you to remove the chiclet keyboard. It is attached to the motherboard with a ribbon cable, which you will need to detach as well.

Now use the smallest flathead screwdriver (or a guitar pick if you care about avoiding scratches on the plastic) to pry the palmrest away from the device. The touchpad is also connected to the motherboard via a ribbon cable; detach it too, as it will need to be reconnected upon reassembly. Once the palmrest is removed, it will look something like this:

KeyboardMod5

Das Keyboard

Now it is possible to take the classic keyboard and install it into the palmrest, taking time to ensure a proper fit. Once this is accomplished, connect the touchpad and keyboard back into the motherboard. Now, reattach the palmrest to the laptop, making sure to apply pressure to the edges of the device. You should hear “click” sounds where you reconnect it. Don't worry, this is normal.

Finally, flip the laptop over one last time and reinsert the chassis screws. Lastly, reattach the USB port covers and the cover for the center, screwing them back in. Now re-insert the battery and flip the laptop right-side up.

IT'S ALIIIIVE!!!

KeyboardMod6

The keyboard should work, but some keys will be swapped out of place and it won't behave 100% properly. It should be bearable for day-to-day use.

However, if you want to get it working fully, you can install the thinkpad-ec mod, found here: https://github.com/hamishcoleman/thinkpad-ec


Hey guys, so I was doing some work with RHEL the other day and bumped into FlatPak... And ohhhhhhhhhhhhhhhh, the potential concerns with the software made me question why my client uses it.

If you've worked as a systems administrator or are familiar with Linux, you have most likely used Red Hat Enterprise Linux or Fedora Linux. On these systems, a package management system called FlatPak is on the rise. However, its security flaws, and the blatant lack of concern shown by its development team, are astounding enough to deal major blows to user privacy on these systems.

Flat What?

FlatPak was originally a revolutionary piece of software whose inception came from the package management problem in Linux. For those who are unaware, different Linux distributions (systems) use different command-line utilities to allow the installation, upgrade and removal of software. Because of this, the people working at Red Hat figured it would be a good idea to make packages universal, and projects like FlatPak were born (alongside similar efforts such as AppImage and Snap); FlatPak is now considered a standard utility in Fedora and Red Hat Linux installations.

It's Escaping!!!

FlatPak uses a site called FlatHub for installing packages, and almost all of them on the site have write permissions to the user's home directory- even if it's not necessary. So in theory, it's possible to simply add a program to FlatHub that executes the equivalent of echo "malicious_command" >> ~/.bashrc and suddenly get full access to the system.

Though the developers at Red Hat and Fedora claim FlatPak is sandboxed (contained) securely to avoid these problems, this is apparently not true.

Old Farts

FlatHub doesn't have the latest software, either. For example, Firefox is one full release version behind. This poses potential security concerns, as vulnerabilities in old software get fixed in new releases. Though companies and organizations offer security patches to fix the vulnerabilities, a third-party packager like FlatPak will probably be slow to apply them.

However, this is outside the scope of FlatPak's goal. The entire point of the system is to allow universal software and packages to be installed easily, and for that to happen, it might be stuck shipping older versions. So it begs the question: should a privacy-minded individual even consider FlatPak or programs with similar goals?

Security Issue (un)Responsiveness

A couple of years back, the developers of FlatPak considered CVE-2017-9780 a minor security issue- when in reality it was a full-fledged local root exploit. What this means is that any hacker who wanted root access back then could simply create a FlatPak app containing a file effectively set to run as root (setuid root), and it would work. This allowed any hacker to distribute malicious software and gain administrative access to Linux boxes running FlatPak. Their lack of concern is still shocking to this very day, and one can only hope that they have changed their attitude towards security.

HOWEVER...

FlatPak was only ever designed to be a universal software distribution tool. Even so, this all poses a big question of whether or not Linux users should consider systems like FlatPak or Snap for installing software... With so many potential security concerns, is it worth using?

On top of this, there's already Linux Distributions such as Bedrock Linux that allow for installation of Linux software using multiple different package managers. This seems like a far more robust solution, but still isn't 100% of the way there yet.

So What Should I Do?

If you're okay with the security risks, continue to use FlatPak, Snap, or whatever universal packaging tool you use. But if you're not okay with them and you value your privacy, consider not using them or finding a solid alternative until the kinks get worked out.

References: https://github.com/flatpak/flatpak/releases/tag/0.8.7

https://www.cvedetails.com/cve/CVE-2017-9780/


Retro Computer

Those who remember their vintage Mac Classic or Commodore 64 also remember how heavily constrained those machines were, with RAM measured in mere kilobytes. Even under these conditions, programmers still had the ability to engineer the same sorts of software we use today.

In the era from the 1970s to the 1980s, we saw several major innovations: the first personal computers, UNIX, the first graphical desktops, word processing software, printing, and the internetworking of devices via ARPANET (which would later become the Internet).

So why don't we see major innovation happening at such a rigorous pace anymore?

Stale Innovation

This may be a hard pill for some to swallow, but the increased availability of high-end hardware lowered the barrier of entry into computer programming, thus decreasing the quality of code. Because of this, the overall competency of the average software developer declines. Naturally, this affects the value of a “new” innovation- what's the point of rewriting code if the rewrite is bound to be of worse quality?

On top of this, large companies, universities and defense contractors no longer fund major innovators. Let's use a modern-day example: the OpenBSD Foundation. They're one of the many organizations dedicated to carrying the UNIX lineage forward, with an extreme focus on producing a system with secure and sane defaults. Ironically, they created OpenSSH and maintain sudo (currently used in almost every enterprise network running Linux or UNIX). So why aren't they recognized? It all boils down to a saying I learned from my grandfather: “Nobody likes change- even if it helps them.”

Convenience Over Simplicity

Wait- don't these mean the same thing? Actually, no.

This is how the American Heritage Dictionary defines these two words:

  • Simple: Having few parts or features; not complicated or elaborate.
  • Convenient: Suited or favorable to one's comfort, purpose, or needs.

For ages, programmers pursued simplicity as a way to provide stable, high-quality code that would run on virtually anything- even a toaster if one were so inclined. This old school of thought still exists, but is largely frowned upon with modern day programming paradigms.

For example, rapid prototyping has brought programming languages like Python to the forefront due to the convenience they provide and the ease of implementation in them. However, it's nearly impossible to produce efficient programs that guarantee stability across a wide variety of different platforms, as Python isn't yet implemented on as many platforms as languages such as C.

The truly sad thing about this is how it all ties right back to my first point on how it reduces competence among programmers.

The Attack Of The Public Domain

How is one supposed to train up a new generation of programmers for the enterprise world if there's no quality code to work on? It's a paradox: large enterprise companies like Microsoft, Apple, and others make use of open source and public domain source code, but rarely contribute anything that could help further the development of open source. In recent news, Microsoft introduced “DirectX 12 for Linux”, but in reality they only made the Application Programming Interface (API) accessible to Linux users. No source code was disclosed, and it was explicitly added solely for their Windows Subsystem for Linux. According to U.S. v. Microsoft (2001), the Department of Justice found an alarming internal Microsoft strategy known as “EEE”: Embrace, Extend, Extinguish. Embrace the idea as if they support it, Extend it with their own additions, then Extinguish it by rendering the original obsolete. Google and Apple have been known to engage in similar practices.

Herein lies the paradox- there's a lack of new enterprise source code to look at without paying a significant amount of money. Because of this, there's a lack of large-scale computing research being conducted that is available to the public.

Lack Of Attentiveness

It's all our fault here... If you were around in the 1990s, you may remember “Windows Buyback Day”, when Linux users protested outside Microsoft's headquarters about being forced to pay for a Windows license they didn't even use.

20 years later, such noble ideas haven't been forgotten- they've been ignored and thrown on the proverbial backburner by the rest of society.

The Good News

Moore's Law is slowly being rendered obsolete. For those unaware of what this entails, Gordon Moore coined this “rule of thumb” in 1965: computing devices would double in capability, exponentially, every year or so. This held true until recently, when manufacturers began reaching the physical limits of what they can fit on a chip.

This means that we're limited in terms of raw performance, and in order to keep up the pace Moore's Law promised, we will be forced to go back to the days of old: writing high-quality software that wrings out every last bit of performance.


For those who haven't read my original post (located HERE), we will be picking up where we left off in the design and implementation of our very own cryptosystem by implementing a Cryptographically Secure Pseudo-Random Number Generator (CSPRNG).

What's The Point Of This?

For any of our cryptographic functions to have any use whatsoever, we need a stream of random information we can feed our algorithms so we can properly perform hashing, symmetric encryption and asymmetric encryption.

The “secret sauce” behind encryption is that it looks like random, garbled data or can pose as data belonging to something else.

Why Can't I Just Use The random() Function?

Sadly, the random() function, as implemented in many languages, is not cryptographically secure: it is actually rather predictable, since its output is entirely determined by its seed, which is usually derived from something guessable like the current time.

In the same vein, CSPRNGs are still deterministic programs at heart, which is why they need thorough testing before anyone trusts them.
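To see the problem in action, here's a tiny sketch using C's rand()/srand(): the “random” stream is completely determined by the seed, which is often just the current time- fine for a game, useless for keys.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
  unsigned seed = (unsigned)time(NULL); // a guessable seed: the current second

  srand(seed);
  printf("stream A: %d %d %d\n", rand(), rand(), rand());

  srand(seed); // an attacker who guesses the seed...
  printf("stream B: %d %d %d\n", rand(), rand(), rand()); // ...gets the exact same numbers
  return 0;
}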

Cool, So What Are We Going to Do?

Since efficiency and stable code are key to making everything work properly, we will want to utilize an already existing library. In our case, we'll use Duthomhas's CSPRNG, located here. This gives us an API to work with, without worrying about how we'll generate random numbers.

But How Do CSPRNGs Work???

Generally, a CSPRNG gathers entropy (randomness) from a variety of sources- for example, the difference in microseconds between keystrokes or the noise on a transmission- and uses fancy mathematics like the modulus operator and exponentiation to stretch it into much, much more random-looking data, quickly.

However, since we're lazy (guilty as charged!!!), we don't want to spend weeks developing a Cryptographically Secure Pseudo-Random Number Generator; we want to work on encryption.

Adding The API

The library's sample source code demonstrates the usage of the C headers by including them and then using the following snippet to generate pseudo-random numbers:

#include <stdio.h>
#include <stdlib.h>   // for malloc/free
#include <duthomhas/csprng.h>

int main()
{
  CSPRNG rng = csprng_create( rng ); // Constructor
  if (!rng)
  {
    fprintf( stderr, "%s\n", "No CSPRNG! Fooey!" );
    return 1;
  }

  long n = csprng_get_int( rng ); // Get an int

  double f;      // Get a double
  csprng_get( rng, &f, sizeof(f) );

  int xs[ 20 ];           // Get an array
  csprng_get( rng, xs, sizeof(xs) );

  void* p = malloc(20);     // et cetera
  csprng_get( rng, p, 20 );
  free( p );

  rng = csprng_destroy( rng );   // Destructor
  return 0;
}

This code sample shows us that we can call csprng_get() to fill different data types with pseudo-random information. This will suit our needs nicely in a separate function, like so:

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

#include "duthomhas/csprng.h"

extern int csprng_get(CSPRNG, void* dest, unsigned long long size);
extern CSPRNG csprng_destroy(CSPRNG);

// Our implementation of the CSPRNG
void * randstuff(void * x) {
  CSPRNG rng = csprng_create( rng ); // create CSPRNG
  if (!rng) { 
    //if the CSPRNG fails to load
    fprintf( stderr, "%s\n", "No CSPRNG! Crap." );
    exit(1); //crash and return an error
  }

  csprng_get(rng, &x, sizeof(x)); // use CSPRNG
  rng = csprng_destroy(rng); // destroy the CSPRNG

  return x;
}

// Main function
int main(void) {
  char rand[50]; //To be filled with random garbage

  strcpy(rand, randstuff(rand));

  return 0;
}

This sample of code sadly isn't enough to get the CSPRNG working, as there's a slight flaw in the API we were provided in the csprng.c file. To fix this, we need to swap the line that says #include <duthomhas/csprng.h> with #include "duthomhas/csprng.h", and then everything should compile.

Alright, So What Did You Do?

We took the sample code and modified it to suit our purposes. Of course, our program doesn't yet work 100%- we're getting segmentation faults. However, this is a great first step, as we're now able to produce pseudo-randomness at will.

To be more precise, we created a function called randstuff() that allows us to produce pseudo-random binary data at will. On top of this, we made the function take the void * data type, allowing us to pass any form of data we please into it. This is somewhat dangerous, and we'll need to explicitly convert our data to use the function later down the road... However, it provides the facilities necessary to create the random information needed for encryption.

Finally, since all this is placed into a separate function, we can call it anywhere we need randomness.
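If you'd like to chase those segmentation faults down now instead of waiting, here is one possible rework of the helper, assuming the same Duthomhas API used above: instead of overwriting the pointer itself with random bytes, it fills the caller's buffer and reports success or failure. Just a sketch, not the final design:

#include <stdio.h>
#include <stdlib.h>

#include "duthomhas/csprng.h"

// Possible rework: fill the caller-supplied buffer with `len` random
// bytes instead of clobbering the pointer that was passed in.
static int fill_random(void *buf, size_t len)
{
  CSPRNG rng = csprng_create(rng); // constructor, as in the sample above
  if (!rng)
    return -1;                     // no CSPRNG available

  csprng_get(rng, buf, len);       // write len random bytes into buf
  rng = csprng_destroy(rng);       // destructor
  return 0;
}

int main(void)
{
  unsigned char garbage[50];       // to be filled with random bytes

  if (fill_random(garbage, sizeof garbage) != 0) {
    fprintf(stderr, "No CSPRNG! Crap.\n");
    return 1;
  }
  return 0;
}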

Be sure to stick around for Part 3, where we implement a hashing function!


Odds are you might have seen this guy around on the internet: Tux The Penguin

This cute little penguin is actually the official mascot of Linux- one of the leading operating systems in existence, deployed on approximately 85% of all devices in the world. Due to its stability and compatibility, the enterprise world craves Linux and loves it... But what has always made Linux unique?

About The Open Source Movement

From the 1970s until the mid-to-late 1990s, open source software was termed “Free Software”, in the sense that source code or schematics were indeed freely available for the public to look at, modify, and use to verify the legitimacy of products they purchased.

Folks who were never familiar with this might instead have heard of researchers dedicating their work to the public domain- a practice that continues to this day and gives a clearer picture of the same idea.

Why Many Geeks Despise Paid Software

To understand why so many Linux and BSD lovers have an avid hatred for Windows and Mac, we shall backtrack to the history of the Apple computer. Waaaaaaaaaay back when dinosaurs roamed the earth- just kidding, cavemen did. Wait, wrong era. Back in the mid-1970s, a man named Steve “Woz” Wozniak invented, designed and prototyped the first Apple computer, which “The Woz” would go on to name the “Apple”:

This computer was way ahead of its time in the sense that it was the first computer an individual could actually do work on, as most computers of the day were prohibitively expensive, oversized, or simply not user-friendly. Wozniak gave his first demo of the device at the Homebrew Computer Club, where an excited buzz filled the air as budding hackers got their own schematic sheets to build their own and attempt to run software on it. Among those paying attention was Bill Gates. According to former members, he appeared curious, but later on down the road he would go on to send this alarming letter:

Bill Gates Open Letter to Hobbyists

At the time, the hobbyists working on furthering this research made use of Altair BASIC, which was in vogue- an invention of Bill Gates, sold at astronomically high prices by some company named Micro-Soft. Naturally, people would copy the tapes and redistribute them. But why would Bill Gates go after people furthering research? To this day, geeks and open source advocates share a dislike for Microsoft simply due to the company's past stance on open source and public domain research- they simply don't trust Microsoft. Apple isn't much better, as Steve Jobs would later go into business with Wozniak and alienate him from his own invention.

UNIX

Meanwhile, AT&T's Bell Labs had invented the UNIX operating system and the C programming language. Due to the sheer capability of UNIX- it was written in C and not assembly, unlike every other system at the time- AT&T decided to market it. Adjusted for inflation, the cost of a copy of UNIX was approximately $10,000 USD, which included the source code and the system installation disks. The system was so innovative that it quickly took over in university and military applications.

One of these universities was the University of California at Berkeley. They took their copy of UNIX and rolled their own tools into it, calling it BSD. They would go on to redistribute the source code for free, resulting in a lawsuit that AT&T lost- effectively giving the Open Source movement a jumpstart in the 1980s and 1990s.

Linux

During this lawsuit, a man named Linus Torvalds released a free kernel to the public, inspired by the UNIXes he wanted to have at home. He called it Linux; development STILL continues to this day, and the system is highly regarded amongst developers for its stability and compliance with known standards (like POSIX).

Why Does All This Ancient History Matter?

Well... It's not ancient... Anyway, how is someone supposed to trust companies founded on the principle that dedicating source code to the public domain is a bad thing, and that sharing ideas for research and innovation is inherently bad?

Also, it's important to note that these companies still attempt to invade users' privacy to sell data to the highest bidder, they still use their customers as “guinea pigs” by rolling out unstable software releases as updates, and they still prevent public access to the source code so no one can attempt to fix these problems.

What makes users THINK they will change?

Alright, So What Should I Do?

If you are opposed to such unethical practices, then consider leaving Windows and Mac in protest and ceasing to use their products, to avoid giving them money. After all, money speaks more than the mouth.

Also, if you support what the open source community does, consider participating, or release your next personal project under an open source license: Choose An Open Source License

Finally, if you support certain open source projects, definitely consider contributing to them to assist in their growth. There are so many projects in desperate need of developers, artists, authors and more.

If you enjoyed this content, be sure to subscribe, leave a comment and tell your friends!


Since I enjoyed writing about enabling support for old Broadcom cards back in the day so much, it's time to share another horror story- the fiendish tale of what caused me to leave Fedora Linux, never to return. For the Dante aficionados, let's enter the proverbial “9 Circles of Dependency Hell”.

But I Just Wanted to Play Quake 3!

Too bad, so sad. I had the official Return to Castle Wolfenstein CD and the .run file to install the data on my Linux system, and wasn't aware of what its dependencies were. For those who are unaware, a dependency is a piece of software that your program needs in order to run. Oblivious to what was needed, I mounted the CD and ran the installer. Little did I know that I would be in for days of work. The game launched, and I was enjoying the WW2 prison-breakout glory of Wolfenstein.

No Games For You

So Fedora uses a package manager to aid in installing software updates. The hitch is that it used to (and may still) upgrade everything without making sure you could revert back to the previous state. Unbeknownst to me at the time, my game depended on old versions of software to work.

I had automated the installation of updates to run once a week and had forgotten about it months prior. Little did I know that the following Sunday morning, I would not be able to launch Wolfenstein because it was missing critical libraries.

Enter Dependency Hell

After doing some research into the packages I needed to launch Wolfenstein, I wound up downloading the .rpm files of the correct versions, since I had learned what dependency hell was prior to this fiasco. But would it strike me? No, I was a sysadmin- I knew my way out of this! Squeezing my stuffed Tux- I mean penguin- I proceeded to install the rpm files using the rpm -ivh command. Little did I expect what would happen next...

The rpm package installer removed the newer, pre-existing binaries I had updated! So now I couldn't launch my file manager, VLC, LibreOffice, or GIMP- apps that I used regularly began crashing.

I ran yum -y update to revert the downgraded software and then Wolfenstein wouldn't start again. This is why I can't have nice things...

Doing The Unspeakable

I wanted to play Wolfenstein badly at this point, and realized I hadn't tried compiling the old versions from source. This involves taking the source code, performing some voodoo magic on it, and producing a binary. Generally, this is not supported by package managers and as a result is often ignored. Thus I embarked on a saga of making my Core 2 Duo CPU (at the time) scream bloody murder.

Tell Me More About Compilation!!! Fine... It's not voodoo magic: GNU Make automates the execution of various compilers in a specific order. A compiler, in a nutshell, is just a program that translates source code into another language. In most cases it translates into machine language, producing binary programs. Compilation is CPU-intensive, and just great for warming a home in the winter.....

Highway to (s)Hell

So after taking inventory of the various versions of software I needed to compile, from libSDL to Xlib, I started downloading the specific versions of source code for each application I needed. 2-3 hours later, I had a folder full of source code. Since I wanted this project over with, I used a while loop in bash to automate the extraction of all the tarballs into their own separate folders.

“You Could Roast a Marshmallow on That Thing!!!”

I made the willful choice to compile the software I needed from source, and was going to see my project to completion. I would enter a folder, run ./configure to generate a Makefile custom-tailored to my hardware, then run make, followed by make install. For each application, this took about 30-45 minutes given the speed of the CPU, and the majority of the time was spent waiting on the computer to finish its prescribed suffering- I mean compiling.

By the time I was done compiling, it was Tuesday morning, and my laptop was so hot I had plugged in an external keyboard and mouse to use it, with the device propped up on 2 textbooks to retain airflow for cooling.

Wasted Time

Once everything was finished, I grabbed a bag of my favorite chips- jalapeño flavored- and launched Wolfenstein. To a nerd's delight, it launched and I muttered “IT'S ALIIIIIIIIVE!!!” to myself. When I began playing the game, however, my excitement was short-lived- the audio was stuttering and the video was horribly choppy. I didn't meet the RAM requirements and would have to wait for an upgrade... :(

The Upgrade

In just a couple of days, the RAM arrived! I tore open the packaging and quickly added it to my laptop. The game finally worked, in all its glory! I was running, shooting 'em up, and defeating Nazi officers in a valiant attempt to save the world. However, the power brick that charged my laptop wasn't powerful enough to keep the device charged with the added RAM, so I waited on a close friend to snag me a spare charger from his old job, as they were closing the office and liquidating old hardware (I would later receive my second laptop from this closure).

Multiplayer Not-So-Awesomeness

As it turned out, multiplayer on Return To Castle Wolfenstein was widely considered one of the best parts of the entire game. So naturally, I wanted to try it out. However every time I attempted to join a multiplayer server I would get errors about “PunkBuster not working”. On Linux at the time, there was a lack of documentation on how to resolve this issue. I tried modifying the PunkBuster configuration, to no avail. On top of this, I even reinstalled Wolfenstein. Still, I couldn't play multiplayer until a patch for the game was released. I missed out on the peak of the multiplayer action because of this.

Nowadays, the multiplayer servers are down and the game is widely considered a good old game- one of the best video games ever made. And I missed my shot at enjoying it while it was still fresh because Fedora just wouldn't play nice. As a result, I wound up leaving Fedora Linux, never to return. Back then I left for Ubuntu but would later migrate over to Arch Linux. Little did I know I would even leave that for Gentoo, Slackware, and BSD for security... Which would later become a passion of mine.

I hope you enjoyed this content! If you do, following my twitter at @nxfury01 or subscribing to my email list at the bottom of this page will notify you every time I release a new post. Thanks for your support!


Cryptography: it's always the hot debate topic in computing, with society trying to preserve it and ensure ciphers are extremely hard to crack, aiding the preservation of privacy (and thus free speech). Governments often oppose cryptographic ciphers precisely because of that difficulty to crack, since it makes investigations and research on other people harder.

However, there's no denying that such systems seem very arcane and tough to understand, and this series of posts intends to shed some light on how cryptographers implement systems that are extremely hard to crack.

This post series exists to help educate people on the importance of cryptographic research and how it corresponds to your privacy online, and how you can better protect yourself in a high risk environment. I would like to give credit where it's due, as I learned most of this content from “Applied Cryptography” by Bruce Schneier.

Like a Lightswitch: Boolean Logic

So let's quickly cram an intro-to-computer-science class into a couple of paragraphs to preface all this... Boolean logic is just a fancy term for the ability to do math with nothing more than true or false statements and a few special operations. This is achieved through the binary number system, which behaves very much like the decimal system in the sense that it has a “place” for digits of a certain value. However, instead of having a 1s, 10s, 100s, etc. place, the binary system has a 1s, 2s, 4s, 8s, etc. place (so binary 1011 is 8 + 0 + 2 + 1, or 11 in decimal). A binary digit is called a bit, and a number that is 8 binary digits long is called a byte.

Like normal math, we can do addition, subtraction, etc. on binary numbers... But we can do more than that, since binary 0 is “False” and anything other than 0 is “True”. We can now use AND, OR, XOR, NAND, and NOT operations on our numbers. AND, OR & NOT are all pretty self-explanatory in how they work (they take inputs and you perform said operations on them). NAND stands for NOT AND, so you basically perform AND and then invert the output value. XOR will only output true if exactly one input is true.
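If you want to poke at these operations yourself, C exposes them directly as bitwise operators; a quick sketch (the input values are arbitrary):

#include <stdio.h>

int main(void)
{
  unsigned char a = 0x0C; // binary 1100 (12)
  unsigned char b = 0x0A; // binary 1010 (10)

  printf("a AND b  = %u\n", (unsigned)(a & b));                 // 1000 ->   8
  printf("a OR b   = %u\n", (unsigned)(a | b));                 // 1110 ->  14
  printf("a XOR b  = %u\n", (unsigned)(a ^ b));                 // 0110 ->   6
  printf("NOT a    = %u\n", (unsigned)(unsigned char)~a);       // 11110011 -> 243
  printf("a NAND b = %u\n", (unsigned)(unsigned char)~(a & b)); // 11110111 -> 247
  return 0;
}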

Cryptography Basics

So what is cryptography? In the most perfect sense, a cryptographic function is an algorithm that can only be reversed using one method, where it is impossible to recover the original contents using any other method. In practice this is often not the case, which is why security experts say nothing is 100% secure: there will always be unknown holes in your cryptographic functions and systems.

When a cryptographic function works by taking a message and a single “key”, performing a series of Boolean and mathematical operations, and using that same key to both encrypt and decrypt the message, it is called a symmetric encryption algorithm. Some of the leading symmetric algorithms (in terms of security) are AES-256, ChaCha20 and Salsa20.
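As a toy illustration of the symmetric idea (and absolutely NOT a secure cipher like the ones named above), XOR-ing a message with a key and then XOR-ing the result with the same key gives back the original- one key, both directions:

#include <stdio.h>
#include <string.h>

// Toy symmetric "cipher": XOR each byte with a repeating key.
// Running it twice with the same key restores the original message.
static void xor_with_key(unsigned char *data, size_t len,
                         const unsigned char *key, size_t keylen)
{
  for (size_t i = 0; i < len; i++)
    data[i] ^= key[i % keylen];
}

int main(void)
{
  unsigned char msg[] = "attack at dawn";
  const unsigned char key[] = "swordfish";  // hypothetical shared secret
  size_t msg_len = strlen((const char *)msg);
  size_t key_len = strlen((const char *)key);

  xor_with_key(msg, msg_len, key, key_len); // "encrypt"
  xor_with_key(msg, msg_len, key, key_len); // "decrypt" with the same key

  printf("%s\n", msg);                      // prints: attack at dawn
  return 0;
}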

If there are 2 keys- one for decryption (called a private key) and one for encryption (called a public key)- it is called an asymmetric encryption algorithm. Some of the leading asymmetric algorithms are RSA and EC-Diffie Hellman.

Finally a hashing algorithm is one that takes a message as input, performs a series of operations on it, and outputs a bunch of garbled information- but if you input the same message again, you will get the same output. This is common for storing passwords and login information. Common hashing algorithms are SHA256 and SHA512.
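And as a toy illustration of the hashing idea (a simple djb2-style mixer, NOT a cryptographic hash like SHA256), notice how the same input always gives the same garbled output, while a tiny change produces a very different one:

#include <stdio.h>

// Toy djb2-style hash: not cryptographic, but it shows the key property-
// the same input always produces the same output.
static unsigned long toy_hash(const char *s)
{
  unsigned long h = 5381;
  while (*s)
    h = h * 33 + (unsigned char)*s++;
  return h;
}

int main(void)
{
  printf("%lu\n", toy_hash("hunter2")); // same value every run
  printf("%lu\n", toy_hash("hunter2")); // identical to the line above
  printf("%lu\n", toy_hash("hunter3")); // one character changed: very different value
  return 0;
}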

Creating keys requires random numbers, and cryptographic systems often rely on software to generate random numbers for keys. The problem is that computers are incapable of being truly random, so there is ongoing research into producing Cryptographically Secure Pseudo-Random Number Generator (CSPRNG) software. Alternatively, some people opt for Hardware Random Number Generators (HRNGs) to produce their crypto keys.

Planning our Cryptosystem

Let's say Bob and Alice want to email each other, but they fear Eve- our eavesdropper- might be listening in. How can we securely share secret cryptographic keys in such a manner that it's impossible for Eve to get them?

Using Multiple Systems

Using some code, it's entirely possible to stitch together multiple algorithms. For example, we could send EC-Diffie Hellman encrypted messages, but encrypt our public and private keys with AES-256 and a personal password. That way, it's not possible for “Eve” to intercept the encryption and decryption keys without having to trick Bob and Alice.

To do this, we need to understand which EC-Diffie Hellman keys go where. The public key encrypts the message, while the private key decrypts the message. So for this to work, Bob would need Alice's public key and his own private key encrypted with AES-256, while Alice would need Bob's public key and her own private key, also encrypted with AES-256.

To simplify this...

  1) Bob and Alice generate public and private keypairs.
  2) Bob and Alice swap public keys.
  3) Bob encrypts Alice's public key and his private key.
  4) Alice encrypts Bob's public key and her private key.
  5) When they wish to email, they unlock their keys.
  6) After unlocking their keys, they encrypt their messages.
  7) To decrypt the message, Bob or Alice unlocks their keys.
  8) They then use their private key to decrypt the message.

This seems rather complex, although most of the process is automated and running behind the scenes. Software like this would manifest itself as a “keychain” or “keyring” in major programs.

The Plan

The first step, which will be shared in the next post, will be to implement a CSPRNG and a hashing algorithm so we can generate keys.

The second step will be to implement an EC-Diffie Hellman cryptographic function, using the hashing algorithm and CSPRNG to aid in the generation of keys.

The third step will be to implement AES-256, which will complete the cryptosystem and allow for encryption of the keys.

The last milestone of this project will be to provide a simple and clean interface so an end-user can encrypt their emails.

References

Schneier, B. (2015). Applied cryptography: Protocols, algorithms, and source code in C. Indianapolis, IN: Wiley.


Once upon a gloomy day, an innocent programmer (innocent? yeah, right...) stared at his Linux terminal in dismay, only to find that the WiFi card he had installed wasn't supported- and he had thrown out the old one. This tale of woe documents my actual misadventures with the Linux kernel, back in the days of Linux kernel version 2.6 or so.

Tell-tale signs

I remember getting my old 2007 Dell XPS right when it came out, from a third-party seller who had swapped the included WiFi card with one that was absolutely horrible for the time. Since replacements were cheap, I invested in a new card for the laptop- a Broadcom.

After a bit of waiting and checking the mailbox constantly, it had arrived and I excitedly popped open the laptop and inserted the card... And threw out the old one- welcome to hell...

I booted my Ubuntu installation, complete with wobbly windows, fired up bash and excitedly ran ping google.com. It failed, and the happy smile quickly turned into an analytical frown as I wondered why this could be. Running ifconfig didn't list the new wlan card either...

Google-Fu

Since I knew this was a Broadcom WiFi card, I plugged into Ethernet (which thankfully worked) and began a massive googling spree. After a couple of hours of searching for “Broadcom Wifi not working Linux”, “Linux Broadcom support”, and so on, I discovered that I needed to utilize a package called ndiswrapper, which effectively allows Windows XP wireless drivers to be loaded under Linux through a wrapper.

NDISWrapper Hell

Excited that there was a solution, I downloaded the Windows XP driver and installed the ndiswrapper package. After adding blacklist bcm43xx, blacklist b43, blacklist b43legacy and blacklist ssb (each on its own line) to /etc/modprobe.d/blacklist.conf, I was ready to install the driver. I installed the driver in a Windows XP virtual machine and obtained the .INF file that corresponded to the Broadcom card. From there, I believed it to be a simple sudo ndiswrapper -i broadcom.inf to install the driver.

But lo and behold, the driver wasn't written for my CPU Architecture, and the installation failed every single time! Out of desperation, I even experimented with QEMU to see if I could just emulate the driver, to no avail.

After 3 days of banging my head against a desk, rebooting, restoring from a backup and more, I gave up entirely on NDISWrapper and turned back to Google.

The Discovery

After a couple more hours of Google-Fu, I stumbled upon a discovery: a developer was working on a patchset for the very WiFi card I had, and these changes weren't in the kernel yet! For those who are unaware, patchsets are groups of .patch files you can apply with the patch command, modifying source code to match what the developer wrote. Excitedly, I downloaded the patchset and a fresh copy of the Linux kernel source code.

Kernel Games

With everything downloaded, I extracted the Linux kernel source code and cd'ed into it. Following this, I ran patch < 1.patch over and over again, changing the patch filename each time until I had applied the entire set. Then I executed cp /boot/config-$(uname -r) .config to copy the stock Ubuntu kernel config into the .config file required for compiling a new kernel. After this, it was just a matter of running make menuconfig to customize the kernel and enable the Broadcom driver. After saving and exiting, it was time to compile the kernel.

After running make deb-pkg LOCALVERSION=-broadcom I sat and waited... for 18 hours. After waking up the next day, I noticed compilation had completed. As expected according to the Debian manual, the .deb files were one directory up. After verifying they were there, I installed the kernel via dpkg, and a reboot verified the custom kernel was running.

It Works, But...

After installing the custom kernel, WiFi was finally working. However, the speed did not increase and the performance wasn't as advertised. This led me to believe it was due to a hardware limitation and I had wasted all this time over a stupid WiFi card...

I later would just install an Ethernet wall jack where I kept my laptop because I wanted the speed at the time, and this laptop would go on to last until 2012, when it gave off magical blue smoke.

Moral of the story: DON'T THROW AWAY GOOD HARDWARE BEFORE YOU TEST!

