Sublime Text: Add the ability to get a file’s relative project path to the Command Palette

Often enough, when I’m debugging an issue that behaves differently in staging or production than it does in a development or local environment, like most developers I add debug statements in those environments to catch and view output from various function calls.

Navigating the files in a complex project eats up significant time. Since you’ve likely got the file that needs debug statements open locally in Sublime Text, here’s a little Command Palette addition that copies the current file’s path (relative to the project root) to your clipboard:

Step #1: Create the Plugin

  1. In Sublime Text, go to Tools > Developer > New Plugin…
  2. Replace the default code with the contents as shown below.
  3. Save the plugin with a name like copy_relative_path.py in your Packages/User directory.
import sublime
import sublime_plugin
import os

class CopyRelativeFilePathCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        window = self.view.window()
        file_path = self.view.file_name()
        project_data = window.project_data()

        # Unsaved buffers have no file path; standalone windows have no project data.
        if not file_path or not project_data:
            sublime.status_message("No file or project data available.")
            return

        # Find the project folder containing this file and copy the path relative to it.
        # (Appending os.sep avoids false matches like /foo/bar matching /foo/barbaz.)
        for folder in window.folders():
            if file_path.startswith(folder + os.sep):
                relative_path = os.path.relpath(file_path, folder)
                sublime.set_clipboard(relative_path)
                sublime.status_message(f"Copied: {relative_path}")
                return

        sublime.status_message("File not in project folder.")

Step #2: Add Plugin to Command Palette

  1. In Sublime Text, go to Preferences > Browse Packages… and open the User folder.
  2. Create a new file named copy_relative_path.sublime-commands with the contents as shown below.
    • "caption" is what will appear in the Command Palette.
    • "command" must match the name of the command class you defined in your plugin (copy_relative_file_path).
  3. Save the file; the new command is picked up immediately, no restart needed. Open the Command Palette and look for the caption you defined.
[
  {
    "caption": "Copy Relative File Path to Project Root",
    "command": "copy_relative_file_path"
  }
]
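Note that Sublime Text derives the command name from the plugin class name by stripping the Command suffix and converting the CamelCase remainder to snake_case, which is why CopyRelativeFilePathCommand becomes copy_relative_file_path. A quick sketch of that convention:

```python
import re

def command_name(class_name: str) -> str:
    """Mimic Sublime Text's class-name-to-command-name convention."""
    # Strip the trailing "Command" suffix, then snake_case the CamelCase remainder.
    if class_name.endswith("Command"):
        class_name = class_name[: -len("Command")]
    return re.sub(r"(?<!^)(?=[A-Z])", "_", class_name).lower()

print(command_name("CopyRelativeFilePathCommand"))  # copy_relative_file_path
```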

Fixing AWS Elastic Beanstalk’s “could not parse environment variables” error for /opt/elasticbeanstalk/bin/get-config

When deploying Laravel or WordPress to AWS Elastic Beanstalk, creating a .env file or simply using environment variables is common practice; however, Elastic Beanstalk tends to choke on string values of “null” and often returns the error message could not parse environment variables.

To resolve this issue, run the following command:

/opt/elasticbeanstalk/bin/get-config optionsettings

Look for any string values that seem likely to cause a problem, especially values that read as “null”. If found, clear the offending variable using the eb CLI (assuming REDIS_PASSWORD is at fault):

eb setenv REDIS_PASSWORD=''
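To spot the culprits programmatically, you could feed the get-config output to a small script. The sketch below assumes optionsettings returns JSON mapping namespaces to option/value pairs (verify the shape against your platform’s actual output; the sample data is hypothetical):

```python
import json

def find_null_strings(optionsettings_json: str):
    """Return (namespace, option) pairs whose value is the literal string "null"."""
    data = json.loads(optionsettings_json)
    offenders = []
    for namespace, options in data.items():
        for option, value in options.items():
            if value == "null":
                offenders.append((namespace, option))
    return offenders

sample = '{"aws:elasticbeanstalk:application:environment": {"REDIS_PASSWORD": "null", "APP_ENV": "production"}}'
print(find_null_strings(sample))  # [('aws:elasticbeanstalk:application:environment', 'REDIS_PASSWORD')]
```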


Creating another MySQL superuser on Amazon Aurora or Amazon RDS

Normally, creating a second (or third, fourth…) MySQL superuser who has all available database privileges is as simple as executing:

GRANT ALL PRIVILEGES ON *.* TO 'newuser'@'%' IDENTIFIED BY 'password';

However, MySQL databases on Amazon RDS and Amazon Aurora allow only one superuser to exist. You can, however, neatly replicate the effect of having a superuser (the same full set of grantable privileges) by modifying your SQL query to the following (create the user first with CREATE USER if it doesn’t already exist):

GRANT EXECUTE, PROCESS, SELECT, SHOW DATABASES, SHOW VIEW, ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, DELETE, CREATE VIEW, INDEX, EVENT, DROP, TRIGGER, REFERENCES, INSERT, CREATE USER, UPDATE, RELOAD, LOCK TABLES, REPLICATION SLAVE, REPLICATION CLIENT, CREATE TEMPORARY TABLES ON *.* TO 'newuser'@'%' WITH GRANT OPTION;


Google Workspace Group Email Delegates: Sending on behalf of a Group mailing list

On Google Workspace, to allow a User to send an email on behalf of a group (like allowing Sully Syed to send an email from [email protected]), do the following:

  1. Go to Gmail on the web ( https://mail.google.com/mail/u/0/#inbox ) and log in.

  2. Click the gear icon in the upper-right and then click See all settings.

  3. Click Accounts and in the Send Mail As section, click Add another email address.

  4. Enter the name “yllus.com Information” and email address [email protected], leave Treat as an alias checked, and click Next step.

  5. Click the Send Verification button to receive an email confirmation. This message will go to the Group and thus your inbox.

  6. Click the verification link, and click the Confirm button when prompted.

  7. With that all done, refresh (F5 / Ctrl+R) to reload Gmail. Then start composing an email; you’ll notice you can select either your Gmail account or your Google Group email address as the sender in the From field. This also works when replying to incoming emails.

Source: Sending from a Google Group address in Gmail – Technology Help

Use Amazon S3 and CloudFront to create a proper HTTP and HTTPS redirect for a domain name

Fairly often we purchase a new domain or decommission an existing one and need to redirect all requests for it to a specific URL on a wholly different domain and website. Practically every service where you can register a domain, like GoDaddy, name.com and Namecheap, offers a redirect function, but nearly all of them fail if the user attempts to visit an HTTPS URL on the domain being redirected.

If you’re already making use of AWS, a very low-cost and foolproof solution is to utilize Amazon S3’s static website hosting feature along with Amazon CloudFront to handle the redirect:

  1. Create a new Amazon S3 bucket (eg. domain-redirect-olddomaincom) with ACLs disabled, the checkbox to block all public access unchecked (as we wish to allow public access) and bucket versioning disabled; create the bucket.
  2. Edit the newly created S3 bucket; under the Properties tab, edit the Static Website Hosting properties.
  3. In the Static Website Hosting properties page:
    • Static website hosting: Enable
    • Hosting type: Host a static website
    • Index document: index.html
    • Redirection rules: Copy/paste the following, with the appropriate changes made to the values of HostName and ReplaceKeyPrefixWith:
      [
          {
              "Redirect": {
                  "HostName": "www.newdomain.com",
                  "HttpRedirectCode": "301",
                  "Protocol": "https",
                  "ReplaceKeyPrefixWith": "path/to/redirect/to/"
              }
          }
      ]
  4. Hit the Save Changes button to finalize the redirect using the S3 bucket; it should work immediately and allow you to test it via the Bucket Website Endpoint URL shown on the page.
  5. Create a new Amazon CloudFront distribution, making sure to:
    1. Create Origin: Ensure that it points to the static website endpoint of your new S3 bucket (eg. domain-redirect-olddomaincom.s3-website.ca-central-1.amazonaws.com, not the default of domain-redirect-olddomaincom.s3.ca-central-1.amazonaws.com)
    2. Alternate domain name (CNAME): Enter all versions of the domain you wish to be handled and redirected (eg. olddomain.com and www.olddomain.com)
    3. Custom SSL certificate: Request a certificate that handles the domains you entered above; this is the (free) option that allows your newly set up domain redirect to respond properly to both HTTP and HTTPS requests
  6. Add the Amazon CloudFront domain (eg. d1gyvh82u10kd6.cloudfront.net) to your domain name’s DNS records for the root and subdomains you wish to redirect.

You’re done! While a bit long-winded as a process, this appears to be the cheapest and most foolproof way to set up a domain redirect for the long term.
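Steps 1 through 4 can also be scripted. The sketch below builds the same redirection rules shown above and, optionally, applies them via boto3’s put_bucket_website; the bucket name, target host, and key prefix are placeholders:

```python
def build_redirect_config(host_name: str, key_prefix: str) -> dict:
    """Build an S3 WebsiteConfiguration that 301-redirects every request to HTTPS."""
    return {
        "IndexDocument": {"Suffix": "index.html"},
        "RoutingRules": [
            {
                "Redirect": {
                    "HostName": host_name,
                    "HttpRedirectCode": "301",
                    "Protocol": "https",
                    "ReplaceKeyPrefixWith": key_prefix,
                }
            }
        ],
    }

config = build_redirect_config("www.newdomain.com", "path/to/redirect/to/")

# To apply it (requires boto3 and AWS credentials):
# import boto3
# boto3.client("s3").put_bucket_website(
#     Bucket="domain-redirect-olddomaincom", WebsiteConfiguration=config
# )
```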

 

WP Offload Media Lite: Offloading all existing media using an SQL query

Assuming you’ve already uploaded all of the contents of the uploads/ folder into the Amazon S3 bucket (set to public), you now need to run the following SQL query to add rows to the wp_as3cf_items table:

INSERT IGNORE INTO wp_as3cf_items (provider, region, bucket, path, original_path, is_private, source_type, source_id, source_path, original_source_path, extra_info, originator, is_verified) 
SELECT 
	'aws', 
	'AWS_REGION_HERE', 
	'AWS_BUCKET_NAME_HERE', 
	CONCAT('wp-content/uploads/', SUBSTRING_INDEX(guid, 'wp-content/uploads/', -1) ) AS path, 
	CONCAT('wp-content/uploads/', SUBSTRING_INDEX(guid, 'wp-content/uploads/', -1) ) AS original_path, 
	0, 
	'media-library', 
	id as source_id, 
	SUBSTRING_INDEX(guid, 'wp-content/uploads/', -1) AS source_path, 
	SUBSTRING_INDEX(guid, 'wp-content/uploads/', -1) AS original_source_path, 
	'a:2:{s:13:"private_sizes";a:0:{}s:14:"private_prefix";s:0:"";}', 
	0, 
	1 
FROM `wp_posts` 
WHERE `post_type` = 'attachment';

Make sure to replace AWS_REGION_HERE and AWS_BUCKET_NAME_HERE above; you may also need to adjust the CONCAT for path and original_path if your bucket has a complex folder structure.
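If you run this against multiple environments, a small helper can substitute the placeholders before execution. A minimal sketch (the region and bucket values below are hypothetical):

```python
def render_offload_sql(template: str, region: str, bucket: str) -> str:
    """Fill in the AWS region and bucket placeholders in the offload SQL."""
    return template.replace("AWS_REGION_HERE", region).replace("AWS_BUCKET_NAME_HERE", bucket)

sql = render_offload_sql(
    "SELECT 'aws', 'AWS_REGION_HERE', 'AWS_BUCKET_NAME_HERE'",
    "ca-central-1",
    "my-media-bucket",
)
print(sql)  # SELECT 'aws', 'ca-central-1', 'my-media-bucket'
```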


Get your MySQL table sizes in MB via a SQL query

Figuring out what tables are making your database excessively large can be really helpful when working with production database exports and keeping various environments up to date. This SQL query outputs that information for all databases on a given server:

SELECT
	table_schema AS `Database`,
	table_name AS `Table`,
	ROUND(((data_length + index_length) / 1024 / 1024), 2) AS `Size in MB`
FROM information_schema.TABLES
ORDER BY table_schema, (data_length + index_length) DESC;


Generate MySQL INSERT statements for a few existing records

Occasionally, you’ll want to duplicate a couple of records from your production MySQL database for local use without dumping the entire table (which could be huge). As a solution, the mysqldump command allows you to specify a table and a WHERE query for that table, allowing you to select specific record ID #s and retrieve only those records as INSERT statements:

mysqldump -h {host} -P {port} -u {username} -p{password} {database_name} {table_name} --where="ID = 1" --no-create-info --no-create-db
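If you script this often, building the argument list programmatically avoids shell-quoting issues with the WHERE clause. A sketch (host, credentials, database and table names are placeholders):

```python
def mysqldump_rows_cmd(host, port, user, database, table, where):
    """Build a mysqldump argv that exports only matching rows as INSERT statements."""
    return [
        "mysqldump",
        "-h", host,
        "-P", str(port),
        "-u", user,
        "-p",  # prompt for the password instead of embedding it in the command
        database, table,
        f"--where={where}",
        "--no-create-info",
        "--no-create-db",
    ]

cmd = mysqldump_rows_cmd("db.example.com", 3306, "readonly", "appdb", "users", "ID = 1")
# subprocess.run(cmd, check=True)  # uncomment (and import subprocess) to execute
```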


Import Environics demographics data into a SQL database via Laravel CLI command

Environics Analytics provides incredibly useful demographics, psychographics and segmentation data for Canada, but the raw data is challenging to use in a general SQL query: the data itself is in one file, the meanings of its columns are in a separate ‘metadata’ file, and the rows cover not just postal codes but other geographic areas as well.

Linked below is a Laravel CLI command that outputs a .SQL file that will handle the creation and insertion of data and definitions for you. The CLI command is invoked as follows:

php artisan environics_demographics:import {csv_demographics_definitions} {csv_demographics_data} {year_demographics_data} {dir_output_data}

Find the full script on GitHub Gist:

Merge a PHP array without duplicates (array_merge_recursive_distinct)

Often enough in PHP, you’ll grab objects from a variety of sources and want to merge them into a single array of results. To merge without duplicates, add the following function to your codebase and make use of array_merge_recursive_distinct the same way you would array_merge_recursive:

// From: https://www.php.net/manual/en/function.array-merge-recursive.php#92195
if (! function_exists('array_merge_recursive_distinct')) {
    /**
     * @param array<int|string, mixed> $array1
     * @param array<int|string, mixed> $array2
     *
     * @return array<int|string, mixed>
     */
    function array_merge_recursive_distinct(array &$array1, array &$array2): array
    {
        $merged = $array1;
        foreach ($array2 as $key => &$value) {
            if (is_array($value) && isset($merged[$key]) && is_array($merged[$key])) {
                $merged[$key] = array_merge_recursive_distinct($merged[$key], $value);
            } else {
                $merged[$key] = $value;
            }
        }

        return $merged;
    }
}
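For comparison, here’s the same distinct-merge logic sketched in Python, with dicts standing in for PHP’s associative arrays:

```python
def merge_recursive_distinct(a: dict, b: dict) -> dict:
    """Recursively merge b into a copy of a: nested dicts merge, other values in b overwrite."""
    merged = dict(a)
    for key, value in b.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_recursive_distinct(merged[key], value)
        else:
            merged[key] = value
    return merged

print(merge_recursive_distinct({"key": {"a": 1}}, {"key": {"b": 2}}))
# {'key': {'a': 1, 'b': 2}}
```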
