Self-signed wildcard SSL certificate for Apache or nginx

This tutorial was done on a Debian 11 system. It covers wildcard certificates (for all subdomains under a domain), but it works just as well for a single subdomain or the primary domain: simply replace the *, which denotes the wildcard, with the subdomain of your choice. So *.qworqs.com is a wildcard, while yazeed.qworqs.com is a subdomain 😉 so let us get started.

Let’s Encrypt has certainly revolutionized the world of SSL certificates (by making them free), but when it comes to wildcard certificates, Let’s Encrypt requires more than just generating the certificate: wildcards require a DNS challenge, which means a system that automatically alters DNS records at your registrar, and that system differs from registrar to registrar.

So while I am developing and need a wildcard SSL certificate, I can simply generate a self-signed wildcard certificate and teach my browser to accept it, and that is that. So here is how to generate that certificate!

So let us get started. First, let us create a folder to store everything, then generate the private key and the self-signed certificate in one go!

cd /etc/ssl
sudo mkdir qworqs.com
# -subj sets the wildcard common name; -addext adds the subjectAltName modern browsers insist on
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -subj "/CN=*.qworqs.com" -addext "subjectAltName=DNS:*.qworqs.com,DNS:qworqs.com" \
  -keyout /etc/ssl/qworqs.com/wildcard-ss.key -out /etc/ssl/qworqs.com/wildcard-ss.crt

Now that we have the certificate and private key, we also need a strong Diffie-Hellman group… this file, though, goes somewhere else, in the nginx directory:

sudo openssl dhparam -out /etc/nginx/dhparam.pem 4096

Now you are done with creating everything you need; the next step is to install the certificate into your nginx configuration.

This can be done directly in each website’s configuration file, but since this is a wildcard certificate that is expected to be used by multiple nginx sites, it is a good idea to group the SSL directives in one place so that you only need to add one line to each config file. This will serve you well when the certificate expires and you need to renew it!
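As a minimal sketch of that idea (the snippet path and file name here are my own choice, adjust them to taste), you could put the shared directives in /etc/nginx/snippets/ssl-qworqs.conf:

ssl_certificate /etc/ssl/qworqs.com/wildcard-ss.crt;
ssl_certificate_key /etc/ssl/qworqs.com/wildcard-ss.key;
ssl_dhparam /etc/nginx/dhparam.pem;

Then every server block that serves a qworqs.com subdomain needs only one extra line:

include snippets/ssl-qworqs.conf;

When you renew, you regenerate the two files in /etc/ssl/qworqs.com and reload nginx; no per-site configuration has to change.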

Types of documents in software development, who writes them, and for whom

Every company has its own procedures, and sometimes its own standard, for the following documents, but this is the most common set; the order loosely follows prerequisites and chronology.

I have put them in a table to keep things simple.

In this document, a client refers to a party that receives the code (any of the stakeholders); Implementation Lead and developer refer to the programmers; system analyst refers to a system analyst.

SOW – Statement of work
Who writes it: Project management, the Chief Information Officer, or a third-party contractor; from the developers’ perspective, any client such as the above.
Written for: The Implementation Lead.

MRD – Marketing requirements document
Who writes it: The marketing department.
Written for: All stakeholders, including the Implementation Lead.

URD / URS – User requirements document / User requirements specification
Description: This document is basically the client outlining the features the developers are to implement.
Who writes it: Project management, with help from system analysts (clients).
Written for: The Implementation Lead and any relevant stakeholders.

SRS – Software requirements specification
Description: A description of a software system to be developed, laying out functional and non-functional requirements (features). It bridges the gap between user/client and developer, and also serves as an agreement.
Who writes it: Business analyst, system analyst, and developers.

TRD – Technical requirements document
Description: An extensive document that connects functionality, features, and purpose together. Creating it is a very lengthy process and requires technical writing skills, as it is meant to convey the whole system to non-technical stakeholders.
Who writes it: The developers, based on the requirement documents submitted by the client.

FSD – Functional specification document

FRD – Functional requirements document

PRD – Product requirements document
Description: This document communicates the capabilities the product will need.

SRD – Software requirements document
Description: A written statement of what the software will do or should do.

FRS – Functional requirements specification
Description: Far more detailed than an SRS.
Who writes it: The Implementation Lead or a system analyst.

Product roadmap
Description: A timetable.

Product backlog
Description: The prioritized list of task-level details needed to execute the strategic plan outlined in the product roadmap.

Sprint backlog
Description: Drawn from the product backlog, this is the list the cross-functional team plans to work on in the next sprint.

SD – Software documentation
Description: A user’s manual (for the end users, not for the developers).

Debian 11 on a laptop with nVidia Optimus

This is a short one, just a quick reference to have your system optimized for both efficiency and performance.

This laptop, which has had a fresh Debian Bullseye installed, has an nVidia card alongside the Intel card. To find out what cards your system is running, you could start with the command

sudo lspci

00:02.0 VGA compatible controller: Intel Corporation HD Graphics 620 (rev 02)
01:00.0 3D controller: NVIDIA Corporation GM108M [GeForce 940MX] (rev a2)

Or you could simply get the relevant data with

lshw -c video

This is a 7th generation intel CPU, namely the “Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz”

Let us start by installing the nVidia drivers. In my case, I installed the detection script followed by the nvidia-driver package from the non-free repositories.
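A minimal sketch of those steps, assuming the non-free component is already enabled in your APT sources:

# install and run the detection script, which prints the recommended driver package
sudo apt-get update
sudo apt-get install nvidia-detect
nvidia-detect
# install the package it recommends; for this GeForce 940MX that is the plain nvidia-driver package
sudo apt-get install nvidia-driver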

If your CPU is post-2007, make sure you do not install “xserver-xorg-video-intel”; if it is already installed, remove it! We want the modesetting driver built into xserver-xorg-core to manage the Intel graphics.
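If it did sneak in, removing it is one command (assuming nothing else you care about depends on it):

sudo apt-get remove xserver-xorg-video-intel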

Install primus (on Debian, primusrun comes from the primus package and relies on Bumblebee):

sudo apt-get install bumblebee-nvidia primus

Once that is set up, to figure out which card is being used as the main one, you can run

glxinfo|egrep "OpenGL vendor|OpenGL renderer"

To execute the same command with the nvidia card and check whether it is being used, run it the following way:

primusrun glxinfo|egrep "OpenGL vendor|OpenGL renderer"

Now, without primusrun you should get Intel; with primusrun, you should see nvidia.

Now you know the system is correctly deciding which card runs which application. I will follow up with more information once I have the time.

To instruct the system to use your nvidia card to play a video, you can execute something like

primusrun totem

(this will force it to use your nvidia card)

Another command will show you the utilization of your nvidia GPU (mine is mostly at 0 percent, as I am not running anything with a primusrun prefix):

nvidia-smi
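If you want to watch the utilization live while launching something with primusrun from another terminal, refreshing nvidia-smi every second does the job:

watch -n 1 nvidia-smi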

Moving away from Laravel smoothly (Pros and cons)

A short while back, I was handed a repository with code written in Laravel, incomplete and somewhat sketchy, with the purpose of taking a look at the code and deciding whether I would take the job or not.

To give you the lowdown first: when researching Laravel, I started by investigating the limitations, and my first Google search sent me in the direction of a blog post by Beau Beauchamp, a developer who seems familiar with the framework.

I obviously didn’t take his word for it (I don’t really know who he is), so I gave myself a two-day intensive Laravel course; unfortunately, he was somewhat right.

The following two paragraphs are from his blog post; they don’t tell you much on their own, but they are better explained as I go:

Laravel prides itself as the framework for “artisans”. The impression is that Laravel is the framework for people who don’t really know how to code and don’t want to learn. I get it.

Laravel is not PHP, per se, it uses an “expressive” syntax or what has been coined as “syntactic sugar” to hide things from you that it thinks “artisans” don’t need to worry about.

Having no experience with Laravel and plenty of experience in PHP, the two-day course I mentioned earlier left me with the following impressions: I was impressed by how massive it is (implementing plenty of features with very few lines) and by how simple and easy it is (truly made for people who don’t want to learn programming). I came away thinking that this is a great framework for a simple, straightforward website, but once you are looking to give the website more edge, a competitive advantage, or complex functionality, the framework is pretty restrictive and does not scale well.

Yes, caching can help with the scalability part, but how much it helps depends on the nature of the website, and for this particular purpose it is not a perfect solution.

So should we throw the existing code away?

My answer is NO. If I do end up taking this job, I plan to launch with the Laravel code, then extend the software with good old plain PHP, with the database acting as the API between the new system and the old one. This way, the website owner can have a functional website where he can promote, advertise, and dip his toes in the water while a different system slowly takes this system’s place as it gets developed.

After updating the existing code from Laravel 7 to Laravel 9 (overhead) and running a security audit on the code, a Varnish or nginx proxy should sit in the middle, and the new code should run transparently behind that proxy. When that happens, I am not even restricted to the same virtual machine running Laravel; I can have two virtual machines running different tools acting as one website, transparently, without the user ever knowing.
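To make the idea concrete, here is a minimal sketch of such a proxy configuration (the backend addresses and the /shop/ path are made up for illustration): the nginx in the middle sends migrated paths to the new system and everything else to the Laravel machine.

server {
    listen 443 ssl;
    server_name example.com;

    # paths already migrated to the new plain-PHP system
    location /shop/ {
        proxy_pass http://10.0.0.2;
    }

    # everything else still goes to the Laravel VM
    location / {
        proxy_pass http://10.0.0.1;
    }
}

As functionality moves over, location blocks migrate with it, and the visitor never knows which machine answered.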

The front end with React

The other issue I have with this project is with React and React Native, which are the front ends of the web and mobile applications respectively.

React is a very cool framework, but there is quite a bit of controversy around it, and around Ajax in general, when it comes to search engine optimization (SEO). In a statement roughly ten years ago, Google said that googlebot is able to read a website the same way a web browser does, and I have seen evidence of that from around the same time, when they were providing tools telling people which pages on their websites had horizontal scroll bars. But regardless of that statement, the fact that most websites appearing in my search results are not Ajax, and that HTML and CSS still run most of the popular websites, does raise some concerns; entering a very competitive market dictates that every inch of competitive advantage is vital to our success.

So, first let me get the advantages and disadvantages of Laravel out of the way, then get into the technicalities of how the new system should co-exist with Laravel and React.

Pros and cons

  • Laravel is a very mature framework, but very opinionated. Opinionated means the designers of the framework expect you to create your website in one specific way, and as long as you stay within those lines, you can make things work; what mature and popular mean is that when you can’t see those lines, someone online has probably explained how to do it with Laravel.
  • Laravel is not the greatest at backwards compatibility, so when a new release comes out, it is not just PHP that you need to worry about, it is also Laravel, and from people’s experiences online, things tend to either break or become buggy when a major release of Laravel is out.
  • Laravel is heavy, very heavy, and to deal with that the developers have come up with workarounds, mainly caching, which lends itself to certain websites more than others; sometimes caching brings very little benefit, and sometimes it is a magic recipe for a super snappy site.
  • Laravel is based on Symfony and works great with React, but even though Google has claimed that their spider treats JavaScript-rendered pages the way a browser does (as I mentioned above), the SEO concern remains.

WordPress does not load correctly behind nginx/varnish reverse proxies (SOLVED)!

Here is my problem: I have a website, and in a directory on that website I have a WordPress installation. That installation opens correctly and loads all the images, CSS, JS, and other files for a proper experience. The only problem is, when you put it behind a Varnish reverse proxy, with an nginx reverse proxy in front for SSL (https), the website design (theme) does not load. You only see the actual HTML page that was requested, but all other elements are never fetched from the server; I actually sniffed the traffic and found that the CSS, JavaScript, and images are never even requested!

So, the short of this story: if you are having problems with pages loading without the theme or design, and you have a similar setup, odds are the problem is with the WordPress settings, not with nginx or Varnish!

A closer look at the page source reveals that the page was loaded over https, but the links to all the page resources are in plain HTTP! Why is that? Simple:

When you open the website over SSL, your browser creates a secure connection with nginx (TLS termination), nginx requests the page from Varnish, and Varnish relays the request to the web server.

As far as the web server serving WordPress is concerned, this request came in over http, not https! So all the page resources should be in http, right? Yes, that is exactly what is happening. But what is the solution?

I tried a few solutions; for example, I changed the WordPress address and site address to httpS, but WordPress is smart enough to use whatever protocol the user accessed the site with for all resources!

There are many programmatic solutions, which is something I avoid because I update WordPress and don’t want to re-apply a fix every time I upgrade; whatever solution I use has got to live in the only file that is never modified when upgrading WordPress: the config file.

WordPress knows it is on SSL from two entries in the environment: $_SERVER['HTTPS'] and $_SERVER['SERVER_PORT']. The proxy sends a hint that the user used https via the $_SERVER['HTTP_X_FORWARDED_PROTO'] entry, taken from the request headers. Hence, adding the following code snippet somewhere near the beginning of the config file should deceive WordPress into thinking it has been accessed over https!

if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
    $_SERVER['HTTPS'] = 'on';
    $_SERVER['SERVER_PORT'] = 443;
}
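For that header to be there in the first place, the SSL-terminating nginx has to send it; in a typical proxy_pass setup that is a single line in the server (or location) block that forwards to Varnish:

proxy_set_header X-Forwarded-Proto $scheme;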

Hope this all works for you; if not, please let me know in the comments and I would be more than glad to help.

Recovering data from a failed 3TB Seagate ST3000DM001

My Seagate ST3000DM001 failed me; it is no longer detected by the BIOS. When the PC starts, you can feel the disk spinning and the head moving in the usual way, but after spending a minute waiting for it on the POST screen, the computer simply ignores it (does not detect it) and boots without it, meaning the operating system does not detect it either!

Before I blame Seagate (I prefer Western Digital in general), this drive is more than 5 years old; it was only used for storage and never ran any software. But still, more than 5 years old.

2- Diagnosis

The most likely cause seems to be the PCB (controller board), as the BIOS does not detect the disk at all. Nonetheless, I have had excellent results with the freezer trick before, even though the freezer trick is not suitable for this type of malfunction (the freezer helps with mechanical issues, often signaled by unhealthy sounds coming from the drive). So I froze it (inside a bag to avoid condensation) and tried it a day later, but absolutely nothing was different; no surprise there.

I also, for no reason whatsoever, removed the lid and took a look inside. No idea what I was expecting to find, but I did it anyway; everything looks normal inside, and hopefully no significant dust got in there.

So I decided it was most likely the board, considered this my diagnosis, and will now act accordingly.

3- Work

3.1 – Find a donor board

Before looking online for a board, I took a look at the drives I had at home. It turned out I do not have two of the same drive, but I do have a 2TB ST2000DM001 which has the exact same board (100687658 Rev: C)! Obviously, the BIOS chip differs between the two boards, so it has to be swapped from one board to the other (basic soldering skills required), but otherwise the boards are identical between the 2TB and the 3TB. I might end up losing both drives in this operation, but getting the data out is certainly worth the try.

3.2 – Copy the data from the donor 2TB drive to a third drive (a Western Digital 2TB drive)

To begin with, I found a similarly sized hard drive to copy the data that resides on the donor disk to, before taking its board out. Luckily, I found a Western Digital Green drive of identical size and sector size, namely a Western Digital Green WD20EARX. This third disk is to make sure I don’t lose any data from the 2TB donor drive, so here is how it is done.

After connecting both disks to a Linux PC, I identified which disk is which using the fdisk command:

fdisk -l

Now that I know which one is the source and which is the destination, I started copying the data from the donor (2TB) to the third disk (the Western Digital).
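When several similarly sized drives are connected, I find it safer to double-check the mapping before writing anything; this is just a habit of mine, not a required step:

lsblk -o NAME,SIZE,MODEL,SERIAL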

Moving the data from the 2TB drive (the healthy one) to a similarly sized drive is the simplest task; with both connected to a Linux machine, I used my favorite cloning tool (nope, not dd; I switched to pv the moment I first tried it):

pv < /dev/sdd > /dev/sdf

Data moved (backed up) from the donor drive (donating its controller board) to an empty drive to hold the data.
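Incidentally, before kicking off a clone like this, it is worth confirming that the destination is at least as large as the source (the device names below are the ones from my setup; yours will differ):

blockdev --getsize64 /dev/sdd
blockdev --getsize64 /dev/sdf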

Now all I can do is wait for 4:30 hours (according to pv), then come back, take the drives out, and start the surgery. It is copying at 115MB/s, probably because the WD is a green drive that uses SMR recording.

Now that it is done copying, I took the boards out (a few screws), de-soldered the BIOS chip as you can see in the video, and soldered the one from the 3TB board onto the donor board and the one from the 2TB board onto the presumably malfunctioning board.

The disk’s BIOS chip is the one branded Winbond and has 8 pins (usually the only chip with 8 pins).

Out of curiosity, I connected the 2TB drive (now with the bad board after the swap), and it worked! This is definitely bad news; the problem was not the board after all! Connecting the 3TB disk yielded exactly the same old problem!

miniDLNA on my WD MyBook Live NAS box

The original firmware, based on Debian, did come with a DLNA server; in this post I am only dealing with openWRT (you need to change the firmware due to a serious security issue on the NAS drive, as shown here).

In openWRT, I recommend you install libffmpeg-full before miniDLNA, since miniDLNA would otherwise pull in libffmpeg-mini, which conflicts with it.

opkg update
opkg install libffmpeg-full
opkg install minidlna luci-app-minidlna

Once the above are done, you can set up your DLNA server from the web interface of openWRT.

I would recommend that the database and log files be placed on the data partition to save space, something you can set manually from within the web interface.
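The same can be done from the shell; a sketch, where db_dir and log_dir are the option names in openWRT’s /etc/config/minidlna, and the mount point is an assumption about where your data partition lives:

uci set minidlna.config.db_dir='/mnt/data/minidlna/db'
uci set minidlna.config.log_dir='/mnt/data/minidlna/log'
uci commit minidlna
service minidlna restart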

To rebuild the database (re-index the files), you will need to stop miniDLNA, run the update, and then start the server again:

service minidlna stop
minidlnad -R
service minidlna start

Blank page running phpMyAdmin on nginx with PHP 8.1 FPM

After an apt-get upgrade, phpMyAdmin stopped working: I would see a blank page that sets a session cookie but does not show a login screen, just a blank page, and the source of the page is also blank.

So, I added the following line to my config.inc.php:

$cfg['environment'] = 'development';

And right after, the following appeared:

Array
(
    [type] => 1
    [message] => Uncaught TypeError: PhpMyAdmin\ConfigStorage\Relation::__construct(): Argument #1 ($dbi) must be of type PhpMyAdmin\DatabaseInterface, null given, called in /var/www/html/pma2/libraries/classes/Twig/RelationExtension.php on line 22 and defined in /var/www/html/pma2/libraries/classes/ConfigStorage/Relation.php:62
    Stack trace:
    #0 /var/www/html/pma2/libraries/classes/Twig/RelationExtension.php(22): PhpMyAdmin\ConfigStorage\Relation->__construct()
    #1 /var/www/html/pma2/vendor/twig/twig/src/ExtensionSet.php(426): PhpMyAdmin\Twig\RelationExtension->getFunctions()
    #2 /var/www/html/pma2/vendor/twig/twig/src/ExtensionSet.php(411): Twig\ExtensionSet->initExtension()
    #3 /var/www/html/pma2/vendor/twig/twig/src/ExtensionSet.php(385): Twig\ExtensionSet->initExtensions()
    #4 /var/www/html/pma2/vendor/twig/twig/src/Environment.php(810): Twig\ExtensionSet->getUnaryOperators()
    #5 /var/www/html/pma2/vendor/twig/twig/src/Lexer.php(457): Twig\Environment->getUnaryOperators()
    #6 /var/www/html/pma2/vendor/twig/twig/src/Lexer.php(108): Twig\Lexer->getOperatorRegex()
    #7 /var/www/html/pma2/vendor/twig/twig/src/Environment.php(466): Twig\Lexer->__construct()
    #8 /var/www/html/pma2/vendor/twig/twig/src/Environment.php(516): Twig\Environment->tokenize()
    #9 /var/www/html/pma2/vendor/twig/twig/src/Environment.php(348): Twig\Environment->compileSource()
    #10 /var/www/html/pma2/vendor/twig/twig/src/Environment.php(309): Twig\Environment->loadTemplate()
    #11 /var/www/html/pma2/libraries/classes/Template.php(123): Twig\Environment->load()
    #12 /var/www/html/pma2/libraries/classes/Template.php(156): PhpMyAdmin\Template->load()
    #13 /var/www/html/pma2/libraries/classes/Core.php(145): PhpMyAdmin\Template->render()
    #14 /var/www/html/pma2/libraries/classes/Config.php(684): PhpMyAdmin\Core::fatalError()
    #15 /var/www/html/pma2/libraries/classes/Common.php(169): PhpMyAdmin\Config->checkPermissions()
    #16 /var/www/html/pma2/index.php(48): PhpMyAdmin\Common::run()
    #17 {main} thrown
    [file] => /var/www/html/pma2/libraries/classes/ConfigStorage/Relation.php
    [line] => 62
)

It turns out this is a PHP 8.1 incompatibility (related to session storage) that has only been fixed in the 5.2 snapshot; download that version of phpMyAdmin and everything should be fine.
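For completeness, a sketch of fetching and unpacking it (the exact file name is an assumption; check phpmyadmin.net for the current 5.2.x archive):

cd /var/www/html
wget https://files.phpmyadmin.net/phpMyAdmin/5.2.0/phpMyAdmin-5.2.0-all-languages.tar.gz
tar xzf phpMyAdmin-5.2.0-all-languages.tar.gz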

Fixing my Android SD card (exFAT) on Linux

This is probably very easy to fix on Windows, but I could not find a Windows machine.

A quick solution that was not very obvious at first was as follows; before you do this, make sure you have unmounted the SD card!
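Assuming the card shows up as /dev/mmcblk0p1, as it does below, unmounting it looks like this:

sudo umount /dev/mmcblk0p1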

apt-get install exfat-utils

And then run the command

sudo exfatfsck /dev/mmcblk0p1

I got a few of the following errors, and answered yes to all:

ERROR: unknown entry type 0xc1.
Fix (Y/N)? y

And that was that

Varnish would not listen on port 80 on Debian 11

This is a somewhat old problem: since Debian moved to systemd, instead of editing /etc/default/varnish, you need to override the varnish.service unit with a copy in /etc/systemd/system/. The contents of that file should look like the ones below, where xxx.xxx.xxx.xxx is the IP Varnish is listening on, one of the IPs of your Varnish server.

So run the following command; the --full flag makes systemctl copy the whole unit into /etc/systemd/system/varnish.service so you can replace its contents:

systemctl edit --full varnish.service
[Unit]
Description=Varnish HTTP accelerator
Documentation=https://www.varnish-cache.org/docs/6.1/ man:varnishd

[Service]
Type=simple
LimitNOFILE=131072
LimitMEMLOCK=82000
ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F -a xxx.xxx.xxx.xxx:80 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m
ExecReload=/usr/share/varnish/varnishreload
ProtectSystem=full
ProtectHome=true
PrivateTmp=true
PrivateDevices=true

[Install]
WantedBy=multi-user.target

Once you have saved the file, execute the following:

systemctl daemon-reload
systemctl restart varnish
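To confirm Varnish is actually listening on port 80 now, a quick check (ss ships with Debian’s iproute2 package):

sudo ss -ltnp | grep :80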