Let’s Encrypt is a Certificate Authority (CA) run by the Internet Security Research Group (ISRG), and it is sponsored by some of the biggest names in the web industry
You are probably here to create a certificate, not to get a history lesson! So let me cut to the chase; for those who want to know more, there is always Wikipedia (Let’s Encrypt on Wikipedia)
So Let’s Encrypt provides certificates for domain names, including wildcard certificates (which I will get to by the end of this article). What we are going through here is the manual process, which serves to give you a taste of how things work. In practice, you are encouraged to use one of the automated methods, for several reasons; one compelling reason is that Let’s Encrypt issues certificates valid for three months only! You don’t want to have to tend to your certificate every three months, do you?
To simplify things, I will create a step-by-step video to demonstrate the creation process and post it here, but for now I will simply take you through the steps. In this tutorial, all you need is SSH access to any server, including one you have at home, or even a virtual machine running Linux inside your Windows computer; anything goes. Once you have a certificate, you can move it to your production server. This allows me to keep things as general as possible, and it is done using the --manual option. So without further ado, let me get to it
1- Log in to a Linux server and install certbot, the tool that allows you to get certificates from Let’s Encrypt. On the official website, they promote the use of snap; here I will skip snap and use Debian’s repository, which is simpler and avoids getting into snap altogether
apt install certbot
Now that you have certbot, let us create a certificate for the domain example.com (replace it with your own)
certbot certonly --manual --preferred-challenges http
The --preferred-challenges directive allows you to specify which challenge (http or dns) you would like to perform. The manual plugin is basically the same as the webroot plugin but not automated, which makes it a hassle to keep up to date, as this form of issuance needs to be renewed manually every 3 months (you can take extra steps to automate this, which I will describe in another post to keep things tidy)
Now, as soon as you run the above, you will enter an interactive dialogue with the following steps
Note: If you want to create a wildcard certificate for your domain name, Let’s Encrypt allows the use of the * wildcard, but only supports the DNS challenge for it, so the command must reflect that. When asked for a domain, simply enter *.example.com (or pass -d ‘*.example.com’), for example:
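A minimal sketch of a wildcard request (the domain is a placeholder); with the DNS challenge, certbot will ask you to create a TXT record under _acme-challenge.example.com instead of an HTTP file:

certbot certonly --manual --preferred-challenges dns -d '*.example.com'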
As soon as you are in, you will be asked for the following:
1- An email for notifications
2- Whether you agree to the terms of service
3- Whether you would like to subscribe to the newsletter
4- Your domain names (you should enter both example.com and www.example.com, separated by either a comma or a space)
5- To create a file containing just this data: Pg1xJ.........-88 and make it available on your web server at this URL: http://example.com/.well-known/acme-challenge/Pg1...........xuu_0
6- To create the second challenge file, this one for www.example.com: a file containing just this data: Ud4m81x..............zupbWEz-88, made available on your web server at this URL: http://www.example.com/.well-known/acme-challenge/Ud4........550 (this must be set up in addition to the previous challenge; do not remove, replace, or undo the previous challenge task yet)
Once both challenges are verified, certbot finishes with its important notes:
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at: /etc/letsencrypt/live/example.com/fullchain.pem
Your key file has been saved at: /etc/letsencrypt/live/example.com/privkey.pem
Your certificate will expire on 2023-03-11. To obtain a new or tweaked version of this certificate in the future, simply run certbot again. To non-interactively renew *all* of your certificates, run "certbot renew"
- If you like Certbot, please consider supporting our work by:
Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le
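For steps 5 and 6, if example.com is served from a plain web root, creating a challenge file can look like the sketch below; this assumes the web root is /var/www/html (an assumption, adjust to your setup), and <token-file-name> / <token-data> stand in for the values certbot actually prints:

mkdir -p /var/www/html/.well-known/acme-challenge
printf '%s' '<token-data>' > /var/www/html/.well-known/acme-challenge/<token-file-name>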
At this stage, there are a few things you should remain aware of
1- DO NOT RENAME OR MOVE THE CERTIFICATES; they need to stay in place for renewal if you decide not to automate and to check on your certificates every 3 months.
2- Copy (don’t move) them to the ssl directory and add them to your config files; the copy can look like the sketch below, and the only lines you will need in your nginx or apache2 config follow after it
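A minimal sketch of the copy, assuming you keep the Apache layout used below (the paths are examples; adjust them for nginx or your own layout):

mkdir -p /etc/apache2/ssl/example.com
cp /etc/letsencrypt/live/example.com/fullchain.pem /etc/letsencrypt/live/example.com/privkey.pem /etc/apache2/ssl/example.com/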
For Apache 2, you need the following 2 lines; modify the path to the files to wherever you have placed them
SSLCertificateFile /etc/apache2/ssl/example.com/fullchain.pem
SSLCertificateKeyFile /etc/apache2/ssl/example.com/privkey.pem
And for nginx
ssl_certificate /etc/nginx/ssl/example.com/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/example.com/privkey.pem;
So, restart Apache or nginx, and you should be able to see the certificate in action. This is the simplest way to use Let’s Encrypt; in my next post, I will cover automating the renewal
Now, after 3 months, the simplest way to renew the certificate is to issue the command
certbot certonly --force-renew -d example.com -d www.example.com
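If you would rather not keep track of the date yourself, the "certbot renew" command mentioned in the output above can run from cron; keep in mind that certificates obtained with --manual will not renew unattended unless you also supply an authentication hook, so this is only a sketch (the schedule is an arbitrary example):

0 3 * * 1 certbot renew --quiet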
Whenever I get the following message
mount /dev/sdd1 /hds/sgt2tb
The disk contains an unclean file system (0, 0).
Metadata kept in Windows cache, refused to mount.
Falling back to read-only mount because the NTFS partition is in an unsafe state. Please resume and shutdown Windows fully (no hibernation or fast restarting.)
Could not mount read-write, trying read-only
The command
ntfsfix /dev/sdd1
resolves the issue, and produces the following message
Mounting volume... The disk contains an unclean file system (0, 0).
Metadata kept in Windows cache, refused to mount.
FAILED
Attempting to correct errors...
Processing $MFT and $MFTMirr...
Reading $MFT... OK
Reading $MFTMirr... OK
Comparing $MFTMirr to $MFT... OK
Processing of $MFT and $MFTMirr completed successfully.
Setting required flags on partition... OK
Going to empty the journal ($LogFile)... OK
Checking the alternate boot sector... OK
NTFS volume version is 3.1.
NTFS partition /dev/sdd1 was processed successfully
The same mount command you see here will now work flawlessly
mount /dev/sdd1 /hds/sgt2tb
I am still unsure which of the processes mentioned above is responsible, as this oftentimes pops up on drives that were never system drives, so there should be no hibernation file problem
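One common culprit worth ruling out is Windows Fast Startup, which can leave any NTFS volume that was mounted at shutdown in this “unsafe” state, not only the system drive; a hedged suggestion is to disable hibernation (which also disables Fast Startup) from an elevated command prompt on the Windows machine:

powercfg /h off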
You can do this in many ways, the most popular of which is Samba, but that is not the software we are using here; we are using SSHFS
The software this post is about is SSHFS, if you are reading this, you probably know what SSH is (Secure shell), and FS stands for File System
Ironically, you will only need SFTP access, not SSH with shell access, so here is the first surprise. Now, to continue with this tutorial, you might want to visit the page I have posted here to create that user and give him/her access to the directory to be mounted; don’t worry, there is a link back here at the bottom of that page!
So, now that you have created that user account on the remote system, let’s get down to business
You will need 2 pieces of software, or 3 if you would like to use private/public key authentication
For the following software, look on their websites for the latest installers for your version of Windows (usually you are looking for the MSI for the 64-bit version of Windows)
1- WinFsp, short for Windows File System Proxy. What this basically does is enable the developer of SSHFS-Win to make the remote filesystem look like a Windows drive, not some separate SFTP application where you have to move the files manually; when it is presented as a drive, you can modify files directly on it, which is the main advantage, and the work happens in the background. It is a driver that presents itself to Windows as a disk while fetching the disk contents from another application. The GitHub page for it is at https://github.com/winfsp/winfsp, or to save you time, just go directly to the download page here: https://github.com/winfsp/winfsp/releases/tag/v1.11. When presented with optional components, if you are not a developer, you will only ever need the Core package, which is the installer’s default
Once WinFsp is installed, we are done with the part that allows us to display file systems that are not really filesystems; the next step is to have something feed it with data from an actual filesystem somewhere else, via SFTP, and that software would be
2- SSHFS-Win, which is the system that sits in the middle, between the SFTP server and WinFsp (the illusion of a hard drive on your Windows machine). Its home on GitHub is at https://github.com/winfsp/sshfs-win. To get the latest, go to https://github.com/winfsp/sshfs-win/releases and look for the one that says latest (not pre-release); download and install it
There is no software to install on the remote side, as most Linux systems already have the functionality, and you have already set up a user in the previous post that I pointed you to a minute ago. So let us mount!
Now, you can (but don’t do it just yet) open File Explorer in Windows, right-click “This PC”, and click on Map Network Drive. A dialogue appears; enter your connection string, which should be something like
\\sshfs\username@serverhostname\
You should then be prompted with a password dialogue box; you enter the SFTP password, and you should now be all set. But why are we not doing this right now? Because when you create files on that drive, they will remotely have rwx permissions for the owner and no permissions for group or others; to work around this, you need to pass the following arguments to the mount
webdev@10.10.20.41:/ create_file_umask=0000,create_dir_umask=0000,umask=0000,idmap=user,StrictHostKeyChecking=no
which is only available via the command line and does not survive reboots. A better alternative is to use sshfs-win-manager, which seamlessly mounts those remote file systems using SFTP; the long and short of it is that it just works
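For reference, the simplest command-line form of the mapping is a plain net use; a hedged sketch assuming SSHFS-Win’s defaults, with the drive letter, user, and host as placeholders (note that this simple form does not carry the umask options above, which typically require invoking the bundled sshfs executable directly, and it will not persist across reboots):

net use X: \\sshfs\webdev@10.10.20.41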
Another program that has a different set of permission issues (I can write files, but I can’t write to them again even though I own the files on the remote system and the permissions should allow it) is SiriKali (https://github.com/mhogomchungu/sirikali); you should be able to find the download link for your platform here (https://mhogomchungu.github.io/sirikali/)
SiriKali also allows you to use other types of authentication which are beyond the scope of this post
So in SiriKali, you need to fill in the above connection information; luckily, most of it is loaded by default. Remember to select the checkboxes you need.
Every company has its own procedures, and sometimes its own standard for the following documents, but this is the most common set; the order is loosely based on prerequisites and chronology
I have put them down in a table to simplify
In this document, a client refers to a party that receives the code (any of the stakeholders), implementation lead and developer refer to the programmers, and system analyst refers to a system analyst.
Doc Abbrv. | Document description | Who writes it | Who is it written for |
SOW | Statement of work | Project management, Chief Information Officer, or a third-party contractor; from the developer's perspective, any client such as the above | Implementation lead |
MRD | Marketing requirements document | Marketing department | All stakeholders, including the implementation lead |
URD / URS | User requirements document / user requirements specification. This document is basically the client outlining the features the developers are to implement | Project management with help from system analysts (clients) | Implementation lead and any relevant stakeholders |
SRS | Software requirements specification. A description of a software system to be developed, laying out functional and non-functional requirements (features). This document bridges the gap between the user/client and the developer, and also serves as an agreement | Business analyst, system analyst, and developers | |
TRD | Technical requirements document. Written by the developers based on the requirement documents submitted by the client, this is an extensive document that connects functionality, features, and purpose together. Creating it is a very lengthy process, and it requires technical writing skills, as it is meant to convey the whole system to non-technical stakeholders. | |
FSD | Functional specification document | ||
FRD | Functional Requirements Document | ||
PRD | Product requirements document. This document communicates the capabilities the product will need. | ||
SRD | Software requirements document. A written statement of what the software will do or should do. | |
FRS | Functional requirements specification. Far more detailed than an SRS | Implementation lead or system analyst | |
Product Roadmap | Timetable | ||
Product backlog | It is the prioritized list of task-level details needed to execute the strategic plan outlined in the product roadmap. | ||
Sprint backlog | Drawn from the product backlog, this is the list of items the cross-functional team plans to work on in the next sprint. | |
SD | Software documentation. A user’s manual (not for the developers) | |
If you are a web developer, you probably understand that OAuth (2) is how you allow your visitors to log in to your website using their Facebook, Twitter, or even GitHub credentials (too many to name).
The uncontested champion among plugins for logging users in to your website using social networks is Laravel Socialite (more like registering to your website, but you get the idea)
So, to avoid confusion, Socialite is the plugin you are looking for; Passport and Sanctum ARE NOT MEANT FOR THIS PURPOSE. Here is how they are different
Plugin | About |
Socialite | Allows you to easily integrate the option to login to your website with a popular website's credentials |
Sanctum | The opposite of Socialite: allows an application to authenticate users using your website as a back-end; usually useful when you create mobile apps, for example. |
Passport | Same concept as Sanctum, but with OAuth2. Unless you need OAuth2, don't use this; Sanctum provides a much simpler API authentication development experience. |
Now, let us get to adding social login to our application using Socialite.
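If you want to follow along, the package is installed through Composer; a minimal starting point, assuming a standard Laravel project (configuring the provider credentials and redirect routes comes after this and is not shown here):

composer require laravel/socialite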
This is a short one, just a quick reference to have your system optimized for both efficiency and performance
This laptop, which has had a fresh Debian Bullseye installed, has an nVidia card alongside the Intel card; to find out which cards your system is running, you could start with the command
sudo lspci
00:02.0 VGA compatible controller: Intel Corporation HD Graphics 620 (rev 02)
01:00.0 3D controller: NVIDIA Corporation GM108M [GeForce 940MX] (rev a2)
Or you could simply get the relevant data with
lshw -c video
This is a 7th generation intel CPU, namely the “Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz”
Let us start by installing the nVidia drivers; in my case, I installed the detection script followed by the nvidia-driver package from the non-free repositories
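A hedged sketch of what that looks like on Debian (assuming the non-free component is already enabled in your APT sources; nvidia-detect only prints the recommended driver package, it does not install anything):

apt install nvidia-detect
nvidia-detect
apt install nvidia-driver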
If your CPU is post-2007, make sure you do not install “xserver-xorg-video-intel”; if it is already installed, remove it! We want xserver-xorg-core to manage the Intel graphics
Install nvidia primus
apt-get install primus
Once that is set up, to figure out which card is being used as the main one, you can run
glxinfo|egrep "OpenGL vendor|OpenGL renderer"
To execute the command with the nvidia card and check if it is being used, execute it in the following way
primusrun glxinfo|egrep "OpenGL vendor|OpenGL renderer"
Now, without primusrun you should get Intel; with primusrun, you should see nvidia
Now you know the system is correctly deciding which card runs which application; I will follow up with more information once I have the time.
To instruct the system to use your nvidia card to play a video, you can execute something like
primusrun totem
(this will force it to use your nvidia card)
Another command that will show you the utilization of your nvidia GPU is the one below (mine is mostly at 0 percent, as I am not running anything with the primusrun prefix)
nvidia-smi
A short while back, I was handed a repository with code written in Laravel, incomplete and somewhat sketchy, with the purpose of taking a look at the code and deciding whether I would take the project or not.
To give you the lowdown first: while initially researching Laravel, I started by investigating its limitations, and my first Google search sent me in the direction of a blog post by Beau Beauchamp, a developer who seems familiar with the framework.
I obviously didn’t take his word for it, as I don’t really know who he is, so I gave myself a 2-day intensive Laravel course; unfortunately, he was somewhat right.
The following two paragraphs are from his blog post; they don’t tell you much, but they are better explained as I go
Laravel prides itself as the framework for “artisans”. The impression is that Laravel is the framework for people who don’t really know how to code and don’t want to learn. I get it.
Laravel is not PHP, per se, it uses an “expressive” syntax or what has been coined as “syntactic sugar” to hide things from you that it thinks “artisans” don’t need to worry about.
Having no experience with Laravel and plenty of experience in PHP, the 2-day course I mentioned earlier left me with the following impressions: I was impressed by how massive it is (implementing plenty of features with very few lines), impressed by how simple and easy it is (truly made for people who don’t want to learn programming), and thinking that this is basically a great framework for a simple, straightforward website; but once you are looking to give the website more edge, a competitive advantage, or complex functionality, the framework is pretty restrictive and not so scalable.
Yes, caching can help with the scalability part, but the degree to which caching helps depends on the nature of the website, and for this particular purpose it is not a perfect solution.
So, should we throw the existing code away?
My answer is NO. If I do end up taking this job, I plan to launch with the Laravel code, then extend the software with good old plain PHP, with the database acting as the API between the new system and the old one; this way, the website owner can have a functional website where he can promote, advertise, and dip his toes in the water, while a different system slowly takes this system’s place as it gets developed.
After updating the existing code from Laravel 7 to Laravel 9 (overhead) and running a security audit of the code, a Varnish or nginx proxy should sit in the middle, and the new code should run transparently through the proxy; when that happens, I am not even restricted to the same virtual machine running Laravel, I can have 2 virtual machines running different tools acting as one website, transparently, without the user ever knowing.
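To make that concrete, the routing in the terminating proxy can be as simple as a couple of location blocks; this is only a hedged sketch, with made-up paths and backend addresses, not the actual configuration of this project:

location /newapp/ { proxy_pass http://10.0.0.2:8080; }
location / { proxy_pass http://10.0.0.1:8080; }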
The other issue I have with this project is with React and React Native, which are the front ends of the web and mobile applications respectively.
React is a very cool framework, but there is quite a bit of controversy around it, and around Ajax in general, when it comes to search engine optimization (SEO). In a statement roughly ten years ago, Google said that googlebot is able to read a website the same way a web browser does, and I have seen evidence of that from around the same time, when they were providing tools telling people which pages on their websites had horizontal scroll bars. But regardless of that statement, the fact that most websites appearing in my search results are not Ajax-heavy, and that plain HTML and CSS still run most of the popular websites, does raise some concerns; entering a very competitive market dictates that every inch of a competitive advantage is vital to our success.
So, first let me get the advantages and disadvantages of Laravel out of the way, then get into the technicalities and how the new system should co-exist with Laravel and react.
Pros
Here is my problem: I have a website, and in a directory on that website, I have a WordPress installation. That installation opens correctly and loads all the images, CSS, JS, and any other files for a proper experience. The only problem is, when you put this behind a Varnish reverse proxy and an nginx reverse proxy for SSL (https), the website design (theme) does not load; you only see the actual HTML page that was loaded, but all other elements are never fetched from the server. I actually sniffed the traffic and found that CSS, JavaScript, and images are never even requested!
So the short of this story: if you are having problems with the page loading without the theme or design, and you have a similar setup, odds are the problem is with WordPress settings, not with nginx or Varnish!
A closer look at the page source reveals that the page was loaded over https, but the links to all the page resources are in HTTP! Why is that? Simple:
when you open the website over SSL, your browser creates a secure connection with nginx (termination), nginx requests the page from Varnish, which relays the request to the web server.
As far as the web server serving WordPress is concerned, this request came in over http, not https, so all the page resources should be in http, right? Yes, this is what is happening, but what is the solution?
I tried a few solutions; for example, I changed the WordPress address and site address to httpS, but WordPress is smart enough to use whatever protocol the user accessed the site with for all resources!
There are many programmatic solutions, which is something I avoid because I update WordPress and don’t want to fix it every time I upgrade; so whatever solution I use has to live in the only file that is never modified when upgrading WordPress: the config file
WordPress knows it is on SSL from two entries in the environment, $_SERVER['HTTPS'] and $_SERVER['SERVER_PORT']. The proxy sends a hint that the user used https via the variable $_SERVER['HTTP_X_FORWARDED_PROTO'] in the request header; hence, adding the following code snippet somewhere near the beginning of the config file should deceive WordPress into thinking it has been accessed over https!
// tell WordPress it is behind an https-terminating proxy
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https') {
    $_SERVER['HTTPS'] = 'on';
    $_SERVER['SERVER_PORT'] = 443;
}
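This only works if the terminating proxy actually sets that header; in nginx this is usually a single proxy_set_header line in the location or server block that passes requests upstream (a hedged example, adjust to your own config):

proxy_set_header X-Forwarded-Proto $scheme;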
Hope this all works out for you; if not, please let me know in the comments and I would be more than glad to help
My Seagate ST3000DM001 failed me; it was no longer detected by the BIOS. When the PC starts, you can feel the disk spinning and the head moving in the usual way, but after spending a minute waiting for it on the POST screen, the computer simply ignores it (does not detect it) and boots without it, meaning the operating system does not detect it either!
Before I blame Seagate (I prefer Western Digital in general), this drive is more than 5 years old, was only used for storage, and never ran any software; but still, more than 5 years old.
The most likely cause seems to be the PCB, as the BIOS does not detect the disk at all. Nonetheless, I have had excellent results with the freezer trick before (even though the freezer trick is not suitable for this type of malfunction; the freezer helps with mechanical issues, often denoted by unhealthy sounds coming from the drive), so I froze it (within a bag to avoid condensation) and tried it a day later, but absolutely nothing was different; no surprise there.
I also, for no reason whatsoever, removed the lid and took a look inside. No idea what I was expecting to find, but I did it anyway; everything looks normal inside, and hopefully no significant dust got in there.
So I decided it was most likely the board, considered this my diagnosis, and will now act accordingly
Before looking online for a board, I took a look at the drives I had at home. It turned out I do not have two of the same drive, but I do have a 2TB ST2000DM001 which has the exact same board (100687658 Rev: C)! Obviously, the contents of the BIOS chip differ between the two boards, so that chip has to be moved from one board to the other (basic soldering skills required), but otherwise the boards are identical between the 2TB and the 3TB. I might end up losing both in this operation, but getting the data out is certainly worth the try
To begin with, I started by finding a similarly sized hard drive to hold a copy of the data that resides on the donor disk before I take its board out. Luckily, I found a Western Digital Green drive of identical size and sector size, namely a Western Digital Green WD20EARX; this third disk is to make sure I don’t lose any data from the 2TB donor drive. So here is how it is done
After connecting both disks to a Linux PC, I identified which disk is which using the fdisk command
fdisk -l
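If you want to double-check which device is which before copying (a mistake here is destructive), lsblk can also show the model and serial numbers; a hedged alternative to fdisk -l:

lsblk -o NAME,SIZE,MODEL,SERIAL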
Now that I know which one is the source and which is the destination, I started copying the data from the donor (the healthy 2TB) to the third disk (the Western Digital). This copy procedure is the simplest task: with both disks connected to a Linux machine, I used my favorite cloning tool (nope, not dd; I switched to pv the moment I first tried it).
pv < /dev/sdd > /dev/sdf
Now all I can do is wait for 4:30 hours (according to pv), then come back, take the drives out, and start the surgery. It is copying at 115MB/s, probably because the WD is a green drive that uses SMR recording.
Now that it is done copying, I took out the boards (a few screws), de-soldered the BIOS chip as you can see in the video, and soldered the one from the 3TB board onto the donor board, and the one from the 2TB board onto the presumably malfunctioning board
The disk’s BIOS chip is the one branded Winbond and has 8 pins (usually the only chip with 8 pins).
Out of curiosity, I connected the 2TB drive (now with the bad board after the swap), and it worked! This is definitely bad news; the problem was not the board after all! Connecting the 3TB disk yielded exactly the same old problem.