Fixing my Android SD card (exFAT) on Linux

This is probably very easy to do on Windows, but I could not find a Windows machine.

A quick solution that was not very clear at first was as follows. Before you do this, make sure you have unmounted the SD card!
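If you are not sure which device node the card is, something like the following should help (the device names are from my machine; yours may differ):

lsblk                    # find the SD card, e.g. mmcblk0 with partition mmcblk0p1
umount /dev/mmcblk0p1    # make sure the partition is unmounted before any repair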

apt-get install exfat-utils

And then run the command

sudo exfatfsck /dev/mmcblk0p1

I got a few instances of the following error, and answered yes to all of them:

ERROR: unknown entry type 0xc1.
Fix (Y/N)? y

And that was that

Docker Cheat Sheet

Like the name implies, this is a cheat sheet to quickly find the command you need. The commands are ordered by how frequently they are used, or at least how frequently I think they will be needed, and I have also grouped them by function.

The container name in the examples is mycontainer; it is just a placeholder that you will need to replace with your own container name. The container ID used here is always 12345abcdef.

============> Containers – list

docker container ls
    Displays the running containers.

docker container ls -a
    -a: also show containers that are not running, i.e. displays all containers, running or not.

docker ps
    Shows running containers; ps is the same as ls, just the older form.

============> Containers – Run

docker run --name mycontainer -i -t imagename
    1- The name of the container to run (mycontainer).
    2- The -i flag keeps STDIN open, so the interactive session stays usable even if the container is not attached.
    3- The -t flag allocates a pseudo-TTY, which must be used to run commands interactively.
    4- The base image to create the container from (imagename).
    Runs the container and leaves you at a shell prompt that executes commands on that container (as if you had SSH-ed into it).

docker run --name mycontainer -d imagename
    -d runs the container in the background.

docker stop mycontainer
    Stops the running container.

docker exec -it mycontainer /bin/bash
    The -it flags give you an interactive shell inside the running container, much like opening an SSH session to it. If this doesn’t work, you may not have bash installed; try the next command.

docker exec -it mycontainer /bin/sh
    Same as above, using sh instead of bash.

ctrl+p followed by ctrl+q
    Detaches from the container without stopping it.

Sometimes, accessing a container through the command line may not be enough; there is a chance you want to access it for file transfer, for example. In that case, you want port 22 exposed, and you want to connect to it like you would connect to a virtual machine.
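As a minimal sketch of that idea (imagename is a placeholder for an image that runs an SSH daemon, and host port 2222 is an arbitrary choice):

docker run --name mycontainer -d -p 2222:22 imagename
ssh -p 2222 root@localhost    # then use scp/sftp for file transfer the same way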

Debian 11 on a laptop with nVidia Optimus

This is a short one, just a quick reference for having your system optimized for both efficiency and performance.

This laptop, which has had a fresh Debian Bullseye installed, has an nVidia card alongside the Intel one. To find out what cards your system is running, you could start with the command

sudo lspci

00:02.0 VGA compatible controller: Intel Corporation HD Graphics 620 (rev 02)
01:00.0 3D controller: NVIDIA Corporation GM108M [GeForce 940MX] (rev a2)

Or you could simply get the relevant data with

lshw -c video

This is a 7th generation intel CPU, namely the “Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz”

Let us start by installing the nVidia drivers. In my case, I installed the detection script followed by the nvidia-driver package from the non-free repositories.
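For reference, on Debian that boils down to something like the following (assuming the non-free repositories are already enabled in your sources.list):

apt-get install nvidia-detect
nvidia-detect                   # prints the recommended driver package
apt-get install nvidia-driver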

If your CPU is post 2007, make sure you do not install “xserver-xorg-video-intel”; if it is already installed, remove it! We want xserver-xorg-core to manage the Intel graphics.

Install nvidia primus

apt-get install primus

Once that is set up, to figure out which card is being used as the main one, you can run

glxinfo|egrep "OpenGL vendor|OpenGL renderer"
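If glxinfo is not installed, it is provided by the mesa-utils package on Debian:

apt-get install mesa-utils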

To execute the command on the nVidia card and check that it is being used, run it the following way

primusrun glxinfo|egrep "OpenGL vendor|OpenGL renderer"

Now, without primusrun you should get Intel; with primusrun, you should see nvidia.

Now you know the system is correctly deciding which card runs which application. I will follow up with more information once I have the time.

To instruct the system to use your nvidia card to play a video, you can execute something like

primusrun totem

(this will force it to use your nvidia card)

Another command that will show you the utilization of your nVidia GPU (mine is mostly at 0 percent, as I am not running anything with the primusrun prefix):

nvidia-smi

Moving away from Laravel smoothly (Pros and cons)

A short while back, I was handed a repository with code written in Laravel, incomplete and somewhat sketchy, with the purpose of taking a look at the code and deciding whether I would take the project on or not.

To give you the lowdown first: while initially researching Laravel, I started by investigating the limitations, and my first Google search sent me in the direction of a blog post by Beau Beauchamp, a developer who seems familiar with the framework.

I obviously didn’t take his word for it (I don’t really know who he is), so I gave myself a two-day intensive Laravel course; unfortunately, he was somewhat right.

The following two paragraphs are from his blog post; they don’t tell you much on their own, but I explain them better as I go.

Laravel prides itself as the framework for “artisans”. The impression is that Laravel is the framework for people who don’t really know how to code and don’t want to learn. I get it.

Laravel is not PHP, per se, it uses an “expressive” syntax or what has been coined as “syntactic sugar” to hide things from you that it thinks “artisans” don’t need to worry about.

Having no experience with Laravel, and plenty of experience in PHP, the two-day course I mentioned earlier left me with the following impressions: I was impressed by how massive it is (implementing plenty of features with very few lines), and impressed by how simple and easy it is (truly made for people who don’t want to learn programming). I was also left thinking that this is a great framework for a simple, straightforward website, but once you are looking to give the website more edge, a competitive advantage, or complex functionality, the framework is pretty restrictive and not so scalable.

Yes, caching can help with the scalability part, but how much caching helps depends on the nature of the website, and for this particular purpose it is not a perfect solution.

So should we throw the existing code away?

My answer is NO. If I do end up taking this job, I plan to launch with the Laravel code, then extend the software with good old plain PHP, with the database acting as the API between the new system and the old one. This way, the website owner can have a functional website where he can promote and advertise, dipping his toes in the water while a different system slowly takes this system’s place as it gets developed.

After updating the existing code from Laravel 7 to Laravel 9 (overhead) and running a security audit on the code, a Varnish or nginx proxy should sit in the middle, and new code should run transparently through the proxy. When that happens, I am not even restricted to the same virtual machine running Laravel; I can have two virtual machines running different tools acting as one website, transparently, without the user ever knowing.

The front end with React

The other issue I have with this project is with React and React Native, which are the front ends of the web and mobile applications respectively.

React is a very cool framework, but there is quite a bit of controversy around it, and around Ajax in general, when it comes to search engine optimization (SEO). In a statement some ten years ago, Google claimed that googlebot is now able to read a website the same way a web browser does, and I have seen evidence of that from around the same time, when their tools started telling people which pages on their websites had horizontal scroll bars. Regardless of that statement, the fact that most websites appearing in my search results are not Ajax, and that plain HTML and CSS still run most of the popular websites, does raise some concerns; entering a very competitive market dictates that every inch of competitive advantage is vital to our success.

So first, let me get the advantages and disadvantages of Laravel out of the way, then get into the technicalities and how the new system should co-exist with Laravel and React.

Pros and cons

  • Laravel is a very mature framework, but a very opinionated one. Opinionated means the designers of the framework expect you to create your website in one specific way; as long as you stay within those lines, you can make things work. Mature and popular means that when you can’t see those lines, someone online has probably explained how to do it with Laravel.
  • Laravel is not the greatest at backwards compatibility, so when a new release comes out it is not just PHP that you need to worry about, it is also Laravel; judging from people’s experiences online, things tend to either break or become buggy when a major release of Laravel is out.
  • Laravel is heavy, very heavy, and to deal with that the developers have come up with workarounds, mainly caching, which lends itself to certain websites more than others; sometimes caching brings very little benefit, and sometimes it is a magic recipe for a super snappy site.
  • Laravel is based on Symfony and works great with React, but even though Google has claimed that their spider treats JavaScript-driven pages the way a browser does, the SEO concerns I raised above still apply.

WordPress does not load correctly behind nginx/varnish reverse proxies (SOLVED)!

Here is my problem: I have a website, and in a directory on that website I have a WordPress installation. That installation opens correctly and loads all the images, CSS, JS, and any other files for a proper experience. The only problem is, when you put this behind a Varnish reverse proxy, with an nginx reverse proxy in front for SSL (https), the website design (theme) does not load. You only see the actual HTML page that was loaded, but all other elements are never fetched from the server; I actually sniffed the traffic and found that the CSS, JavaScript, and images are never even requested!

So, the short of this story: if you are having problems with pages loading without the theme or design, and you have a similar setup, odds are the problem is with the WordPress settings, not with nginx or Varnish!

A closer look at the page source reveals that the page was loaded over HTTPS, but the links to all the page resources are in HTTP! Why is that? Simple.

When you open the website over SSL, your browser creates a secure connection with nginx (SSL termination), nginx requests the page from Varnish, and Varnish relays the request to the web server.

As far as the web server serving WordPress is concerned, this request came in over HTTP, not HTTPS! So all the page resources should be in HTTP, right? Yes, this is exactly what is happening. But what is the solution?

I tried a few solutions; for example, I changed the WordPress address and site address to httpS, but WordPress is smart enough to use whatever protocol the user accessed it with for all resources!

There are many programmatic solutions, which is something I avoid because I update WordPress and don’t want to fix it again every time I upgrade. So whatever solution I use has got to live in the only file that is never modified when upgrading WordPress: the config file.

WordPress knows it is on SSL from two entries in the environment, $_SERVER['HTTPS'] and $_SERVER['SERVER_PORT']. The proxy sends a hint that the user used HTTPS via the $_SERVER['HTTP_X_FORWARDED_PROTO'] variable in the request header. Hence, adding the following code snippet somewhere near the beginning of the config file (wp-config.php) should deceive WordPress into thinking it has been accessed over HTTPS!

if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https') {
    // the proxy says the original request was HTTPS, so tell WordPress that
    $_SERVER['HTTPS'] = 'on';
    $_SERVER['SERVER_PORT'] = 443;
}
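For completeness, the header has to be set by the proxy in the first place; on the nginx side, that is typically a line like the one below in the server block doing the SSL termination (a sketch, assuming varnish listens on 127.0.0.1:6081):

location / {
    proxy_pass http://127.0.0.1:6081;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
}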

Hope this all works for you; if not, please let me know in the comments and I would be more than glad to help.

Recovering data from a failed 3TB Seagate ST3000DM001

My Seagate ST3000DM001 failed me; it was no longer detected by the BIOS. When the PC starts, you can feel the disk spinning and the head moving in the usual way, but after spending a minute waiting for it on the POST screen, the computer simply gives up (does not detect it) and boots without it, meaning the operating system does not detect it either!

Before I blame Seagate (I prefer Western Digital in general), this drive is more than 5 years old, was only used for storage, and never ran any software. But still, more than 5 years old.

2- Diagnosis

The most likely cause seems to be the PCB, since the BIOS does not detect the disk at all. Nonetheless, I have had excellent results with the freezer trick before (even though the freezer trick is not suitable for this type of malfunction; the freezer helps with mechanical issues, often signaled by unhealthy sounds coming from the drive). So I froze it (inside a bag, to avoid condensation) and tried it a day later, but absolutely nothing was different. No surprise there.

I also, for no reason whatsoever, removed the lid and took a look inside. I have no idea what I was expecting to find, but I did it anyway. Everything looks normal inside, and hopefully no significant dust got in there.

So I decided it was most likely the board, considered this my diagnosis, and acted accordingly.

3- Work

3.1 – Find a donor board

Before looking online for a board, I took a look at the drives I had at home. It turned out I do not have two of the same drive, but I do have a 2TB ST2000DM001 which has the exact same board (100687658 Rev: C)! Obviously, the BIOS chip on the board differs between the two drives, so that has to be flipped from one board to the other (basic soldering skills required), but otherwise the boards are identical between the 2TB and the 3TB. I might end up losing both drives in this operation, but getting the data out is certainly worth the try.

3.2 – Copy the data from the donor 2TB drive to a third drive (western digital 2TB drive)

To begin with, I had to find a similarly sized hard drive to hold the data residing on the donor disk before taking its board out. Luckily, I found a Western Digital Green drive of identical size and sector size, namely a Western Digital Green WD20EARX. This third disk is to make sure I don’t lose any data from the 2TB donor drive, so here is how it is done.

After connecting both disks to a Linux PC, I identified which disk is which using the fdisk command

fdisk -l
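If the fdisk output leaves any doubt, listing the drives with their model names makes them harder to mix up:

lsblk -o NAME,SIZE,MODEL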

Now that I knew which one is the source and which the destination, I started the process of copying the data from the donor (2TB) to the third disk (the Western Digital).

Moving the data from the 2TB drive (the healthy one) to a similarly sized drive is the simplest task in this procedure. With both connected to a Linux machine, I used my favorite cloning tool (nope, not dd; I switched to pv the moment I first tried it).

pv < /dev/sdd > /dev/sdf

Data moved (backed up) from the donor drive (the one donating its controller board) to an empty drive that will hold the data.

Now, all I can do is wait the four and a half hours (according to pv), then come back, take the drives out, and start the surgery. It is copying at 115MB/s, probably because the WD is a green drive that uses SMR recording.

Now that it is done copying, I took the boards out (a few screws), de-soldered the BIOS chip as you can see in the video, soldered the one from the 3TB board onto the donor board, and soldered the one from the 2TB board onto the presumably malfunctioning board.

The disk’s BIOS chip is the one branded Winbond with 8 pins (usually the only chip with 8 pins).

Out of curiosity, I connected the 2TB drive (now with the bad board after the swap), and it worked! This is definitely bad news; the problem was not the board after all! Connecting the 3TB disk yielded exactly the same old problem.

miniDLNA on my WD mybook live NAS box

The original firmware, based on Debian, did come with a DLNA server; in this post, I am only dealing with openWRT (you need to change the firmware due to a serious security issue on the NAS drive, as shown here).

In openWRT, I recommend you install libffmpeg-full before miniDLNA, since miniDLNA would otherwise pull in libffmpeg-mini, which conflicts with it.

opkg update
opkg install libffmpeg-full
opkg install minidlna luci-app-minidlna

Once the above are done, you can set up your DLNA server from the web interface of openWRT.

I would recommend that the database and log files live on the data partition to save space on the root filesystem, something you can set manually from within the web interface.
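The same can be done from the command line through UCI; a sketch follows, assuming the data partition is mounted at /mnt/data (the db_dir and log_dir option names are the ones I believe luci-app-minidlna exposes, so verify them in /etc/config/minidlna):

uci set minidlna.config.db_dir='/mnt/data/minidlna'
uci set minidlna.config.log_dir='/mnt/data/minidlna'
uci commit minidlna
service minidlna restart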

To rebuild the database (re-index the files), you will need to stop miniDLNA, run the update, and then start the server again

service minidlna stop
minidlnad -R
service minidlna start

Varnish would not listen on port 80 on debian 11

This is a somewhat old problem: since Debian moved to systemd, instead of editing the file in /etc/default/varnish, you will need to override the unit with a file in /etc/systemd/system/ named varnish.service. The contents of such a file should look like the listing below, where xxx.xxx.xxx.xxx is the IP varnish is listening on, one of the IPs of your varnish server.

So, run the following command and give the unit the contents below (the --full flag replaces the whole unit rather than creating a drop-in override):

systemctl edit --full varnish.service
[Unit]
Description=Varnish HTTP accelerator
Documentation=https://www.varnish-cache.org/docs/6.1/ man:varnishd

[Service]
Type=simple
LimitNOFILE=131072
LimitMEMLOCK=82000
ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F -a xxx.xxx.xxx.xxx:80 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m
ExecReload=/usr/share/varnish/varnishreload
ProtectSystem=full
ProtectHome=true
PrivateTmp=true
PrivateDevices=true

[Install]
WantedBy=multi-user.target

Once you have added the file, execute the following

systemctl daemon-reload
systemctl restart varnish
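To confirm varnish is actually bound to port 80 on the IP you configured, something like this should show it listening:

ss -ltnp | grep ':80'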

Shrinking a disk partition under Debian 11 bullseye

As usual, I will start by getting to the bottom of it, then explain everything

First, you need to shrink the file system, then the partition where the filesystem resides. Replace /dev/sda4 with whatever your partition is named.

1- Shrinking the filesystem

Unmount the partition to be resized:

umount /mountpoint

otherwise you will get a message such as

Filesystem at /dev/sda4 is mounted on /mountpoint; on-line resizing required
On-line shrinking from 30453104 to 98098 not supported.

The following commands are relevant to the resize2fs program; they are hands-on examples of its use. Take a close look at the description of what each one does before you proceed, and pick how you want to use the command.

* Show the minimum size we can squeeze this partition to without losing data
resize2fs -P /dev/sda4
* do the filesystem resize to the MINIMUM possible size (the number you ended up with in the previous command)
resize2fs -M /dev/sda4

The command above moves all data to the beginning of the filesystem/drive, then shrinks it to the smallest possible size.

2- Shrinking the partition

2.1- Find the boundaries of the file system with fdisk
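A sketch of that fdisk session follows (the numbers are examples; the one rule that matters is that the recreated partition must start at the SAME sector as before, and be at least as large as the shrunken filesystem):

fdisk /dev/sda
# p   print the table and note the start sector of /dev/sda4
# d   delete partition 4 (this only touches the table, not the data)
# n   recreate partition 4 at the same start sector, with a size
#     comfortably larger than the minimum resize2fs reported, e.g. +60G
#     (answer No if fdisk offers to remove the ext4 signature)
# w   write the table and exit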

3- You are DONE

If this is it, why is there much more in this tutorial? Simply put, the part above does very little explaining; if you want to understand what we did, you will need a bit more.

The assumption: I have a partition that only holds 5% data, and I would like to shrink the partition to ten percent of its size.

Unlike Windows, where success depends on your luck with where the data resides, you can always shrink a Linux partition to whatever size fits the data on it (without losing data).

In this tutorial, I will assume the partition is /dev/sda4; you will need to replace that with whatever your partition is.

1- collecting information about our partition

fdisk /dev/sda

then the p command to print the partition table.

df -h

This should show you all the partitions, info about them, where they are mounted, and how much space is used.

The file system can be shrunk with resize2fs.

The command “resize2fs -M” first moves all the data to the beginning of the filesystem, then shrinks it to the smallest possible size.

First, how large is the file system at the moment?

tune2fs -l /dev/sda4

then multiply the block count by the block size.
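As a worked example, using the block count from the resize2fs message earlier in this post (a block size of 4096 is typical, check your own output):

tune2fs -l /dev/sda4 | egrep 'Block count|Block size'
# e.g. Block count: 30453104 and Block size: 4096
# 30453104 * 4096 = 124,735,913,984 bytes, roughly 116 GiB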

New firmware for my Western Digital “My Book Live” NAS storage device

The WD My Book Live is a NAS device based on Debian Linux. Since Debian stopped supporting its processor, the device has received no updates and probably never will, so the next best thing to do, in my opinion, is to install openWRT.

Before you start

1- Only the first few paragraphs of this tutorial (STEPS 1 THROUGH 6) are the instructions you need; the rest is just extra reference, and in short you don’t need to read it to have your device running, but I do recommend YOU SKIM THE WHOLE THING BEFORE YOU START.
2- This procedure requires you to take the disk out and install it in a PC to switch the firmware, then put it back.
3- The upgrade will delete all your data, so you will need to move any data already on your WD NAS drive somewhere else before you begin.

Step 1: Move any existing data BEFORE TAKING APART.

Move any data you may have on the drive to a temporary location outside the NAS drive. This has to be done before taking the drive apart, as the unconventional 64 kB block size of the disk will be nothing but trouble if you want to extract the data with the disk mounted on a Linux PC, for example.

Step 2: Take the disk apart

I have included photos to help you do that; it is not rocket science.

Step 3: Mount the disk on a Linux PC (Windows and Mac should work)

Connect the disk to a Linux PC (Windows might work with software such as Etcher, but I make no guarantees).

Step 4: Download the openWRT firmware

Go to the drive’s page on the openWRT website (here), and download the firmware to your Linux (or Windows) PC

Step 5: Write the firmware to the disk.

Decompress the file, then copy it to the drive with a command similar to the one below, but make 100% sure to replace sdx with your own drive designation

 dd if=/root/wdsata.img of=/dev/sdx bs=64k

This writes the firmware to the disk, overwriting it, and effectively losing any data you did not back up in step 1.
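Once dd finishes, a quick sanity check that the image actually landed on the disk (again, replace sdx with your drive):

sync
fdisk -l /dev/sdx    # should now show the partition layout from the openWRT image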

Step 6: Put the drive back in the enclosure

Nothing to say here; this is the reverse of step 2.

Step 7: Create the data partition

At this stage, your device will boot, but you will need to create/expand the data partition, that is, the partition that should not be overwritten when you upgrade the firmware later on.

You are done.

FAQ about the original firmware

What is that vulnerability about

It comes from WD’s cloud service. The bottom line is that many devices were completely wiped remotely by malicious users, and it is unknown whether the data itself leaked, so yes, it is very serious.

What is the difference between quick factory restore and full factory restore

Quick factory restore is probably what you are looking for; the latter seems to do a zero fill on the hard drive after performing the factory restore, to disallow data retrieval (for example, before you sell the device). You can verify this by logging in over SSH, and by the fact that the tool tips state something to that effect.

Inspecting the device

To begin with, I logged in via SSH and inspected some things. To enable SSH access on the My Book Live original firmware, you will need to visit a URL such as http://mybooklive/UI/ssh or http://192.168.2.116/UI/ssh (replace the IP with your own).

The system is based on the following CPU

CPU
processor       : 0
cpu             : APM82181
clock           : 800.000008MHz
revision        : 28.130 (pvr 12c4 1c82)
bogomips        : 1600.00
timebase        : 800000008
platform        : PowerPC 44x Platform
model           : amcc,apollo3g
Memory          : 256 MB

With that out of the way, a look at /etc/apt/sources.list revealed that it is a Debian distro. The only problem with this is that Debian stopped supporting this CPU some time ago, so you can’t go past Debian 8 (Jessie).

deb http://ftp.us.debian.org/debian/ squeeze main
deb http://ftp.us.debian.org/debian/ wheezy main
#deb-src http://ftp.us.debian.org/debian/ wheezy main
#deb http://ftp.us.debian.org/debian/ sid main

Checking the disk info with hdparm revealed that the disk is a WDC WD20EARX-00PASB0, which is, as I expected, a Caviar Green (SMR disk).

parted (the new fdisk, so to speak) shows the following partition scheme for the existing system.

Model: ATA WDC WD20EARX-00P (scsi)
Disk /dev/sda: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system     Name     Flags
 3      15.7MB  528MB   513MB   linux-swap(v1)  primary
 1      528MB   2576MB  2048MB  ext3            primary  raid
 2      2576MB  4624MB  2048MB  ext3            primary  raid
 4      4624MB  2000GB  1996GB  ext4            primary

And a “df -h” reveals

Filesystem            Size  Used Avail Use% Mounted on
/dev/md0              1.9G  555M  1.3G  31% /
tmpfs                 5.0M     0  5.0M   0% /lib/init/rw
udev                   10M  6.7M  3.4M  67% /dev
tmpfs                 5.0M     0  5.0M   0% /dev/shm
tmpfs                 100M  4.6M   96M   5% /tmp
ramlog-tmpfs           20M  4.5M   16M  23% /var/log
/dev/sda4             1.9T  2.1G  1.9T   1% /DataVolume

A good alternative for this Gigabit LAN network-attached storage might be openWRT, the same firmware I use for my routers!

There are things you need to know in advance though, the first of which is that changing the firmware will require you to delete everything on the drive, as Western Digital has used a bunch of unconventional choices, such as a 64 kB block size!

With that out of the way, you can refer to the step-by-step openWRT installation section above (including backing up your system), then come back here for the why, etc…

What if I want to revert back to the WD software?

That is indeed a good question, and to make reverting easy, I have backed up the entire disk to another one until I am sure that I don’t want to go back. Also worth mentioning: the latest firmware on the WD website dates back to 2015, which at the time of writing is 6 years ago!

Where can I find the up-to-date openWRT distribution for this drive?

OpenWRT has a page dedicated to this drive, both the single and the Duo, here (https://openwrt.org/toh/western_digital/mybooklive)

What are the benefits of the NAS box (enclosure)? Why not just take the hard drive out and put it in a PC somewhere?

The Western Digital My Book Live has a super low power CPU, and when the disk is spun down it consumes very little energy (not a significant load on your UPS, for example). It is also fan-less, so with the exception of the drive itself when spinning, it is silent, which is also a nice thing. So I would argue that keeping it and updating its software is a good idea.

Another reason is the amount of relevant software provided through openWRT packages, covering many more things than the original firmware (miniDLNA included).

How do I keep the system up to date

If you come from a Debian background, you would normally apt-get update then apt-get upgrade, and that is that. In openWRT, there is no such blanket upgrade command; the upgrade command in openWRT is meant to upgrade one package specified by name, so the solution is the following line

 opkg list-upgradable | cut -f 1 -d ' ' | xargs -r opkg upgrade