Looping through filenames in Bash

January 21st, 2024 by L'ecrivain

To loop through the subdirectories of a folder, when those subdirectories may contain spaces in their names, use the following procedure.

OLD_IFS=$IFS && IFS=$'\n'
for directory in $HOME/somefolder/*/; do
    echo "some code here"
done
IFS=$OLD_IFS

 

Posted in Computing Notes

Bash and *nix Note no. 2

January 10th, 2024 by L'ecrivain

 

This document contains some notes on .bashrc for use with Linux systems.

alias ll='ls -alF'
alias la='ls -A'
alias l='ls -CF'

if [ -f ~/.bash_aliases ]; then
. ~/.bash_aliases
fi

The preceding two blocks come standard on Ubuntu as of version 22.04. One tweak for WSL2 on Windows might be the addition of this line.

sudo bash "/etc/rc.local"

and in /etc/rc.local, add the following.

#!/bin/bash
rm -f /etc/resolv.conf
echo "nameserver 1.1.1.1" >> /etc/resolv.conf
echo "nameserver 8.8.8.8" >> /etc/resolv.conf

This is because WSL2 may place non-routable 172.x addresses in resolv.conf. As of this writing, the recommended steps for disabling generation of the file do not resolve the issue: the file is regenerated with every new terminal launched in Windows Terminal. Having this run with each new Bash login ensures network communications succeed. The location /etc/rc.local may seem like an odd location for this.

The following crontab entries check each minute that a virtual machine is running in VirtualBox.

*/1 * * * * VBoxManage startvm "VMNAME" --type headless
*/1 * * * * VBoxManage startvm "VMNAME2" --type headless

If the virtual machine is already running, the command will not start a second copy of it. This is better than attempting to start it via a system-wide script after a reboot, because it allows a simple crontab for the user the machine needs to run under. To run a script every five minutes, add the crontab entry as follows:

*/5 * * * * /home/USERNAME/nextcloudcron

In this example, the nextcloudcron script will run every five minutes. This particular script contacts my Nextcloud instance for webcron. It does not have a .sh extension on the filename, because some implementations may disallow crontab scripts with file extensions.

The following checks inside a subdirectory and pulls in the files therein as bash sources. This is useful for breaking aliases, variables, and other items into different files.

if [ -d ~/.bashrc.d ]; then
    for rc in ~/.bashrc.d/*; do
        if [ -f "$rc" ]; then
            . "$rc"
        fi
    done
fi

The following sets nano as the editor used by crontab (via the VISUAL environment variable).

export VISUAL=nano

On Oracle Linux 9 on AWS, it is necessary to install Cronie to enable cron jobs.  To ensure it starts after reboots, add the following to /etc/rc.local.

/sbin/crond

To create a date line for a log file, use the following:

echo $(date) >> /home/USER/FILENAME

The shebang for the top of Bash files is

#!/usr/bin/env bash

To auto-update via DNF and leave a log of what was accomplished, use the following script.  The script will write a list of the updates to the systemupdates.log file, and then update the system with details of that process written to the dnfupdates.log file.

#!/usr/bin/env bash
echo $(date) >> /home/USER/systemupdates.log
sudo dnf check-update >> /home/USER/systemupdates.log
echo $(date) >> /home/USER/dnfupdates.log
sudo dnf update -y >> /home/USER/dnfupdates.log 2>&1

Posted in Computing Notes

Bash and *nix Note no. 1

July 7th, 2023 by L'ecrivain

7 July 2023:
Ubuntu 22.04's update notifier seems not to include the option to configure any settings, at least after certain conditions exist on a particular system.  To stop the notifier from appearing, use

sudo apt-get remove update-notifier

7 July 2023:
There are non-snap builds of ungoogled-chromium for Ubuntu.

6 July 2023 and prior:

To disable automatic updates on Ubuntu:

sudo dpkg-reconfigure unattended-upgrades
sudo apt remove packagekit

In .bashrc create the following alias:

alias nano="nano -c --guidestripe 80"

This will always open Nano with line and column numbering along with a long-line marker at the specified column. The long-line marker works in Nano versions higher than 4, which excludes the version in the CentOS 7 repositories.

In .bashrc add the following line:

export DISPLAY=\
"`grep nameserver /etc/resolv.conf | sed 's/nameserver //'`:0"

This adds the IP address of the instance to the display variable for use with Windows Subsystem for Linux and VcXsrv.

In .nanorc add the following items. The first two lines are commented out because the command-line switch used in the alias provides line numbers without creating artifacts during copy-paste.

#set linenumbers
#set constantshow

# https://www.nano-editor.org/dist/latest/nanorc.5.html
set guidestripe 80

include /usr/share/nano/autoconf.nanorc
include /usr/share/nano/patch.nanorc
include /usr/share/nano/nanorc.nanorc
include /usr/share/nano/groff.nanorc
include /usr/share/nano/awk.nanorc
include /usr/share/nano/man.nanorc
include /usr/share/nano/java.nanorc
include /usr/share/nano/sh.nanorc
include /usr/share/nano/po.nanorc
include /usr/share/nano/texinfo.nanorc
include /usr/share/nano/python.nanorc
include /usr/share/nano/perl.nanorc
include /usr/share/nano/pov.nanorc
include /usr/share/nano/ocaml.nanorc
include /usr/share/nano/tcl.nanorc
include /usr/share/nano/debian.nanorc
include /usr/share/nano/lua.nanorc
include /usr/share/nano/xml.nanorc
include /usr/share/nano/gentoo.nanorc
include /usr/share/nano/objc.nanorc
include /usr/share/nano/tex.nanorc
include /usr/share/nano/guile.nanorc
include /usr/share/nano/php.nanorc
include /usr/share/nano/c.nanorc
include /usr/share/nano/nftables.nanorc
include /usr/share/nano/spec.nanorc
include /usr/share/nano/elisp.nanorc
include /usr/share/nano/ruby.nanorc
include /usr/share/nano/go.nanorc
include /usr/share/nano/nanohelp.nanorc
include /usr/share/nano/default.nanorc
include /usr/share/nano/json.nanorc
include /usr/share/nano/css.nanorc
include /usr/share/nano/mgp.nanorc
include /usr/share/nano/asm.nanorc
include /usr/share/nano/mutt.nanorc
include /usr/share/nano/javascript.nanorc
include /usr/share/nano/postgresql.nanorc
include /usr/share/nano/rust.nanorc
include /usr/share/nano/fortran.nanorc
include /usr/share/nano/cmake.nanorc
include /usr/share/nano/makefile.nanorc
include /usr/share/nano/html.nanorc
include /usr/share/nano/changelog.nanorc

Using Tar and Gzip:

tar -zcvf output_file_name[.tar.gz or .tgz] directory_to_compress

tar -Jcvf output_file_name[.tar.xz] directory_to_compress
This reads the compression level for XZ from an environment variable, but note that xz-utils is not installed by default on Ubuntu 10.04.

tar -xvf works to extract the .tar.gz files created as above.

The tar and gzip commands use compression settings specified in .bashrc:

export GZIP=-9

export XZ_OPT=-9

Add the following to .bashrc for a console calculator:

calc() { echo "$*" | bc -l; }

For the ll alias, use the following in .bashrc

alias ll='ls -alF'

To change the terminal name use the following in .bashrc.  The command differs depending upon which version of Gnome one uses.

# For older versions of gnome-terminal
#shellrename() { read -p "Enter new shell name: " name && PROMPT_COMMAND='echo -ne "\033]0;${name}\007"'; }
# Works on OpenSUSE 15.3
#PS1=$PS1"\[\e]0;test1\a\]"
shellrename() { read -p "Enter new shell name: " name && PS1=$PS1"\[\e]0;${name}\a\]"; }

To obtain a uuid, add the following to .bashrc:

uuid() { UUID=$(cat /proc/sys/kernel/random/uuid) && echo "$UUID"; }

The following sets a Zen Burn color scheme for the terminal, though it sometimes causes problems when working via SSH.

# Zen Burn
# Another old way that works great in gnome-terminal while causing problems
# in some configurations involving SSH:
echo -ne '\e]12;#BFBFBF\a'
echo -ne '\e]10;#DCDCCC\a'
echo -ne '\e]11;#3F3F3F\a'
echo -ne '\e]4;0;#3F3F3F\a'
echo -ne '\e]4;1;#705050\a'
echo -ne '\e]4;2;#60B48A\a'
echo -ne '\e]4;3;#DFAF8F\a'
echo -ne '\e]4;4;#506070\a'
echo -ne '\e]4;5;#DC8CC3\a'
echo -ne '\e]4;6;#8CD0D3\a'
echo -ne '\e]4;7;#DCDCCC\a'
echo -ne '\e]4;8;#709080\a'
echo -ne '\e]4;9;#DCA3A3\a'
echo -ne '\e]4;10;#C3BF9F\a'
echo -ne '\e]4;11;#F0DFAF\a'
echo -ne '\e]4;12;#94BFF3\a'
echo -ne '\e]4;13;#EC93D3\a'

Circa 2015: Adding Windows to CentOS 7 and Grub 2 after installing CentOS, when no automatic configuration of Windows boot options occurred. The file to manually edit is /etc/grub.d/40_custom.

menuentry "Windows 10 Professional" {
    set root=(hd0,1)
    chainloader +1
}

After editing, update Grub and reboot using the following command.

grub2-mkconfig --output=/boot/grub2/grub.cfg

The following shows one of the custom file contents circa 2018 or 2019 after some changes to installers and autodetection.

#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries.
# Simply type the menu entries you want to add after this comment.
# Be careful not to change the 'exec tail' line above.
menuentry "Windows (system) (on /dev/sda1)" --class windows --class os {
    insmod part_msdos
    insmod ntfs
    insmod ntldr
    set root='(hd0,msdos1)'
    ntldr ($root)/bootmgr
}

Posted in Computing Notes

Minecraft Tweaks

March 16th, 2023 by L'ecrivain

To launch the server:

java -Xms2048M -Xmx2048M --add-modules=jdk.incubator.vector -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPauseMillis=200 -XX:+UnlockExperimentalVMOptions -XX:+DisableExplicitGC -XX:+AlwaysPreTouch -XX:G1HeapWastePercent=5 -XX:G1MixedGCCountTarget=4 -XX:InitiatingHeapOccupancyPercent=15 -XX:G1MixedGCLiveThresholdPercent=90 -XX:G1RSetUpdatingPauseTimePercent=5 -XX:SurvivorRatio=32 -XX:+PerfDisableSharedMem -XX:MaxTenuringThreshold=1 -Dusing.aikars.flags=https://mcflags.emc.gs -Daikars.new.flags=true -XX:G1NewSizePercent=30 -XX:G1MaxNewSizePercent=40 -XX:G1HeapRegionSize=8M -XX:G1ReservePercent=20 -jar server-1.21.1.jar nogui

Posted in Computing Notes

Which place to live, no. 1

February 13th, 2022 by L'ecrivain

The U.S. Department of Labor shows the states with mandatory lunch breaks and the states with mandatory rest periods.  The National Conference of State Legislatures has a list of which states codified religious and personal-belief vaccine exemptions.

This article narrows down a long-term settlement plan based upon a critical factor, probably the most important factor.  Some might not think this factor is that important, but since it is encountered day in and day out, every working day of every year, it is extremely important.  The factor is whether or not a worker gets breaks at work.  Numerous states do not mandate any kind of break.  Others mandate a lunch period, and others mandate lunch and rest periods.  Some mandate lunch periods only for those under 18.  Ostensibly, in those states, the day after one's 18th birthday is the point at which one no longer needs rest during a workday.

I have worked in Kentucky and Tennessee, and these states have different regulations.  Tennessee requires a lunch break but does not specify rest periods.  Kentucky specifies lunch and rest periods.  In Kentucky, every employer I ever worked for except one complied with all the provisions.  A single employer hated the idea of rest periods, but such behavior was illegal, which gave people the right to argue against it.  In Tennessee, which has a slightly different framework, I have had two employers which provided both lunch and rest breaks, one employer that provided only a 30-minute lunch break in the center of a ten-hour shift, and one employer that provided none until severe complaints were made, after which the complainers got the mandatory lunch break while the non-complainers still received no breaks.

New Hampshire was on my short list, but it mandates only lunch breaks, like Tennessee.  That means a senior walking 10 miles a day in a distribution center would have no rest periods; that would be legal, and they would have no argument against that treatment.

With n=4, lunch-and-break compliance stands at 50% for a state that regulates only a lunch period, and greater than 95% in a state that regulates both lunch and breaks.  Part of that may not even relate to enforcement, but to the simple cultural difference that workers should be able to rest for a few minutes during the day, and people growing up under such a culture just expect that it will happen for others.

Illinois, Kentucky, Colorado, and Vermont pass the anti-slavish test.  California also regulates lunch and rest periods, but they and New York have chosen the path of gene-therapy anti-humanism and cannot be considered among the moral states that we consider further.

Of those four states, Vermont's regulation about rest periods is a vague standard of what is reasonable to protect health and hygiene.  The other three specify minimum time amounts.  So any person who will have to work a full-time job in their senior years should live in one of those three, possibly four, states.  Frankly, anyone who has to work a full-time job at all should live in one of those states.  Maybe a further look at the pragmatic realities of Vermont's non-specific standard would make it a quartet of states with respect for workers.

A non-passive-incomer's state of choice would then be Illinois, Kentucky, or Colorado, so that they may have some rest breaks during the thousands of days they spend working over the course of their lives.  It is illegal to work animals with no breaks in the same states where you can do it to humans if you are a monied interest.

Posted in Social Science notes

T+n time series analysis

December 10th, 2021 by L'ecrivain

This isn't stock advice; it's a data and math project. AAPL is interesting (not investing advice!) from a data perspective. Part of my big data project was the creation of a tool to analyze every symbol regressed on every other symbol for numerous lags. In addition to revealing the vendor's mangling of data for unknown purposes, that analysis of lags produced at least one independent variable upon which the regression of future AAPL produced a .3 R-squared value. That is a weak or low effect size, but it was a genuine discovery which passed hypothesis testing across time horizons. That variable was Direxion Daily S&P Oil & Gas Exp. & Prod. Bear 2X Shares (DRIP). DRIP was the first independent variable I discovered through the lag analysis. The problem is that the lag analysis is very computationally intensive. Then, after doing that, ferreting out the red herrings caused by bogus data takes another large amount of time. It seems that my data provider inserts bogus data producing .9 r² values between different vectors. These are of course problems that one can mitigate with better code. That, too, takes time. This isn't a document about that first statistically significant predictor for AAPL at t+3.

Keeping the t+ and t− notation in mind causes some difficulty for me. T+2 means today plus 2: if today is January 5, t+2 is January 7. If we are analyzing t+3, that is January 8. Programming in 0-indexed languages produced an inner impulse to count from 0 instead of 1; counting from zero through the list [5, 6, 7, 8] stops on 7, one short of the correct answer. This is a heavily ingrained impulse that I must both use and mitigate. This project used Python, R, and shell scripting.
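A small Python sketch of the distinction (purely illustrative; the dates are the ones from the example above):

```python
from datetime import date, timedelta

today = date(2024, 1, 5)

# t+n means "today plus n days": t+2 is January 7, t+3 is January 8.
t_plus_2 = today + timedelta(days=2)
t_plus_3 = today + timedelta(days=3)

# The zero-indexing impulse: counting 0, 1, 2 through [5, 6, 7, 8]
# stops on 7, one element short of the correct t+3 answer, 8.
days = [5, 6, 7, 8]
correct = days[3]      # 8, matching t+3 = January 8
impulse = days[3 - 1]  # 7, the off-by-one trap
```

The calendar arithmetic and the list index agree only if the counting starts at the same place, which is exactly the impulse to mitigate.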

The first one-factor model devised using DRIP was as follows:

−2.065 × (DRIP Typical %change(t)) − 0.0040 = AAPL Typical %change(t+3)

This model was tested in Stata using data manipulated in Python and R. This formula may be of no use in the future, or even now. I might revisit this when discussing the lags portion of the project later.

Posted in Computing Notes

Stata reference material

October 11th, 2021 by L'ecrivain

Linear regression analysis using Stata contains a look at Stata output for regression. There is also an excellent document on Regression Analysis | Stata annotated output at this link. There is an excellent document on Basic Data Analysis and Manipulation at this link. Y(t−1) is the first lag of Y(t) (Chapter 10, p. 1).

Posted in Computing Notes

Rudimentary versioning system

October 19th, 2020 by L'ecrivain

Some time ago, Python 2 was the default language for use with Linux and Gnome 3. A set of extensions for Gnome, called Nautilus Python existed which allowed one to create customized right click menus. One of these was called “Historical Copy” and it created a lovely copy of the file with a timestamp inserted into the file name. The timestamp was constructed to allow the files to appear easily sorted when perusing directories. Software rot affects all software and especially software that requires Python 2 libraries that maintainers no longer ship on new versions of Linux. To counter this problem, we have a rudimentary versioning system which adheres to the Keep it Sweetly Simple (KISS) principle.

The following code is appended to the .bashrc file in the home directory.

### Historical copy rudimentary versioning system
### Saves a historical copy and a note about that copy
### Compressed to 68 character width for website display purposes

historicalcopy() { mkdir -p local-historical-versions &&\
 timestamp=$(date +"%y.%m.%d.%H%M") && read -p\
 "Enter filename:" n && read -p "Enter note: " note &&\
 filename=$(pwd)/$n && file_name=$(basename $filename) &&\
 left_side="${file_name%.*}" && extension="${file_name##*.}" &&\
 cp $filename\
 local-historical-versions/${left_side}-$timestamp.$extension &&\
 cp $filename\
 /var/www/html/hv/archives/${left_side}-$timestamp.$extension &&\
 sed -i "11i${left_side}-${timestamp}.${extension} ${note}"\
 /var/www/html/hv/index.php; }

From the directory with the file to be versioned, the command historicalcopy is typed.  This creates a directory in the current directory called local-historical-versions and copies the historical version into that directory.  It then copies the file to a complete historical-versions archive and appends a PHP file in the web-server directory with both a link and a comment.  The reason periods are used instead of dashes is that experience demonstrated my Python software had difficulty with filenames incorporating dashes.  This naming style is similar to the RPM naming convention, which uses name-version-release; this rudimentary versioning system uses a timestamp as the version number.  There are plenty of more advanced systems such as Git, but sometimes we can work more efficiently with a simple, direct historical list of what file did what way back when.  This could easily be changed so that an HTML file is updated in a local directory instead of a PHP file on a web server.  The PHP file is for future usage incorporating user authentication and a long-term code repository.  The code can then be used on OSX and Windows Subsystem for Linux by pulling the PHP to the local machine, inserting the necessary file, and then transferring it back to the web server via SSH.  In a way that seems like Git, but this is for the use case where one wants a simple-to-use list.

A typical workflow goes something like this.  Open a terminal and navigate to the directory containing a heavily evolved R script.  Use the historicalcopy command on that script with a note something like "prior to adding the new data frame for time series data on Kentucky unemployment".  Then open the file in the editor of choice to work on it.  It is very useful for Python programs where huge changes can take place that require significant removal of existing code.  This is the case for one of my projects, an ongoing effort involving thousands and thousands of lines of code that has evolved over four years.  This simple scheme lets me remember which files had the important code that I still want to use in the future.  The flat-file format and easy naming convention allow easy migration and backups, and reduce the learning curve.


Posted in Computing Notes

Persistent Notification in Gnome 3

August 15th, 2020 by L'ecrivain

This is a GTK notification in Linux that remains on screen until it is clicked.

#!/usr/bin/python3

import gi
gi.require_version('Notify', '0.7')
from gi.repository import Notify
gi.require_version('Gtk', '3.0')

Notify.init("Hello world")
Hello = Notify.Notification.new("Hello world - the heading",
                                "This is an example notification.",
                                "dialog-information")

# https://www.devdungeon.com/content/desktop-notifications-linux-python
Hello.set_urgency(2) # Highest priority
Hello.show()

Last updated and tested 12 December 2020 on CentOS 8 (not Stream).

Posted in Computing Notes

Labeling variables in R

February 2nd, 2020 by L'ecrivain

This great procedure makes it easy to remember what variables relate to in R. One of the troubles with exploratory data analysis is that when one has a lot of variables, it can become confusing what a variable was created for originally.  Certainly code comments can help, but they make the files larger and unwieldy in some cases.  One solution is to add comment fields to the objects created so that we can query the object and see a description.  So, for example, we could create a time series called sales_ts, then create a window of it called sales_ts_window_a, another called sales_ts_window_b, and so on for several unique spans of time.  As we move through the project we could have created numerous other variables and subsets of those variables.  We can see the details of those by using head() or tail(), but that may not be a very useful or clear measure.

To that end, these code segments allow applying a descriptive comment to an item and then querying that comment later via a describe command.

example_object <- "I appreciate r-cran."
# This adds a describe attribute/field to objects that can be queried.
# Could also change to some other attribute/Field other than help.
describe <- function(obj) attr(obj, "help")
# to use it, take the object and modify the "help" attribute/field.  
attr(example_object, "help") <- "This is an example comment field."
describe(example_object)

The above example refers to an example object, that could easily be sales_ts_window_a mentioned above.  So we would use the attribute command to apply our description to sales_ts_window_a.

attr(sales_ts_window_a, "help") <- "Sales for the three quarters Jan was manager"
attr(sales_ts_window_b, "help") <- "Sales for the five quarters Bob was manager"

After hours or days have passed and there are many more variables under investigation, a simple query reveals the comment.

describe(sales_ts_window_a)
[1] "Sales for the three quarters Jan was manager"

This might seem burdensome, but RStudio makes it very easy to add this via code snippets. We can create two code snippets. The first goes at the top of the file and defines the describe function used to read the comment field applied to an object. Open RStudio Settings > Code > Code Snippets and add the following code. RStudio requires tabs to indent these.

snippet lblMaker
        #
        # Code and Example for Providing Descriptive Comments about Objects
        # 
        example_object <- "I appreciate r-cran."
        # This adds a describe attribute/field to objects that can be queried.
        # Could also change to some other attribute/Field other than help.
        describe <- function(obj) attr(obj, "help")
        # to use it, take the object and modify the "help" attribute/field.  
        attr(example_object, "help") <- "This is an example comment field."
        describe(example_object)

snippet lblThis
        attr(ObjectName, "help") <- "Replace this text with comment"

Now one can use code completion to add the label maker to the top of the script. Simply start typing lblMak and hit the Tab key to complete the code snippet. When wanting to label an object for future examination, start typing lblTh and hit Tab to complete it, then replace ObjectName with the variable name and replace the string on the right with the comment. These code snippets provide a valuable way to store descriptive information about variables as they are created and set aside for potential future use.

This functionality overlaps with the built-in comment() functionality, with a bit of a twist. The description added via this method appears at the end of the print output when typing the variable name, while the built-in comment does not print out. comment() is also less intuitive than calling describe() and receiving a description.

R contains a built-in describe command, but it often is not useful. summary() is the one I use most often. For a good description, I import the psych package and use psych::describe(data). Because of that, the describe method in this article is very useful. The printout appears like the following, with the [1]…

[screenshot of the describe() console output]

Adding attributes other than "help" could easily be accomplished: DescribeAuthor, DescribeLocation, and other functions could be added. When using a console to program, a conversational style makes the work flow better.

Posted in Computing Notes

My Favorite Function

May 25th, 2019 by L'ecrivain

My favorite function of all time is varsoc in Stata.  That's saying a lot, because I have been working with computers for decades and have written software in several languages, used many different types of administrative software tool sets, and owned a lot of books with code in them.  Varsoc regresses one variable, y, upon another variable, x, and then regresses each lag of y on x to produce output that allows one to know the best-fit lag for a regression model.  It allows someone analyzing time series data to immediately know that data from several periods prior is a better predictor of today's reality than more recent data.  I adore Stata for scientific analysis.  In order to use this for my big data project, I needed to automate it, and so I wrote an R vignette that would analyze 45 lags and produce the relevant test statistics. My vignette produces r² values [1], parameter estimates, and f-statistics for 45 lags of y regressed on x. The p-values are then written to a CSV file. The decision rule for a p-value is that we reject the null hypothesis if the p-value is less than or equal to α/2 [2]. The data comes from 5GB of CSV files that were created via Python.

Running the lags shows us the relationships between the historical prices of two securities. When we regress y on x in this case, we are regressing the price of security 2 on security 1. We then do this on a lag: the L1 of security 2 regressed on security 1's L0, then the L2 of security 2 on security 1's L0, and so on for 45 iterations. For example, we might find that the price of a gold ETF 44 days ago has the best relationship with the price of Apple stock today, as compared to the price of that same gold ETF 12 days ago or even today. That's an example only and not anything substantiated in the data. There will certainly be some spurious relationships: an ETF buying shares of Apple and then the same ETF's fee going up the next month, for example. To mitigate this, the vignette uses the first difference of the logarithm so that the data is stationary, and the CSVs are already produced so that unit roots are accounted for. This is a research project to identify what actually bodes well in other sectors. It runs on every listed security on the American exchanges: every symbol is regressed on Apple, every symbol is regressed on Microsoft, and so on.
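As a minimal sketch of that procedure in pure Python (illustrative only; the helpers log_diff, ols_r2, and best_lag are hypothetical names, not the project's actual code, which used R, Stata, and 5 GB of CSVs):

```python
import math

def log_diff(prices):
    # First difference of the logarithm, to make a price series stationary.
    return [math.log(b) - math.log(a) for a, b in zip(prices, prices[1:])]

def ols_r2(x, y):
    # R-squared of a simple OLS regression of y on x.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

def best_lag(x_ret, y_ret, max_lag):
    # Regress y at time t+lag on x at time t for each lag 1..max_lag;
    # return the (lag, r2) pair with the highest R-squared.
    scores = {}
    for lag in range(1, max_lag + 1):
        xs, ys = x_ret[:-lag], y_ret[lag:]
        scores[lag] = ols_r2(xs, ys)
    return max(scores.items(), key=lambda kv: kv[1])
```

The real project runs the same idea over every symbol pair and 45 lags, with test statistics written to CSV; here, given two toy return series where one perfectly leads the other by two periods, best_lag would recover lag 2.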

I initially began this project some time ago, and at that time I stopped because it was going to take a solid month of continuous 12-core processing to accomplish the entire series. In retrospect I should have let it proceed, but there would have been a great tradeoff: I couldn't have played Roblox, The Isle, and Ark Survival Evolved with my daughter. Finally, I've got the research running on a new machine dedicated to that purpose. That machine uses an AMD Ryzen 5 3500 and an NVMe SSD, and the program runs on 6 cores in parallel. Previously, with the one-month estimate, it was running concurrently on 12 cores of Westmere Xeon CPUs and storing the output in RAM instead of on an SSD. This will serve as an interesting test for the Ryzen, since all six cores will be running at 100% for months on end. The operating system is openSUSE Leap 15.2, the R version is 4.0.5, and the Python version is 2.7.18.

One of the reasons to write these articles is for my own memory. It gets harder to remember as one gets older. These blog posts are essentially a public notebook to aid myself and others.


[1]  R² is the coefficient of determination, which is the square of the Pearson correlation coefficient, r, the formula for which is ρ = β1(σx/σy), where β1 is the parameter estimate. ASCII and Unicode plain text do not place a circumflex, ^, on top of the β. For this documentation the objective is multiplatform long-term readability, so an equation editor with specialized support for circumflexes is out of the question.

[2]  There is also the rejection-region method: we reject the null hypothesis if the test statistic's absolute value is greater than the critical value, which we can express as Reject H0 if |t| > t(α/2, n−1).

Posted in Computing Notes

Risk, religion, and temping

March 25th, 2019 by L'ecrivain

“How The Masses Deal With Risk (And Why They Remain Poor)” appeared on Capitalist Exploits in January of 2016. The quote that resonated the most was “What is also a fact is that the mean return of early stage VC investments is north of 50% per annum. This is the mean and like anything else with a little bit (OK, a lot) of work, outperforming the average in anything is entirely achievable if you put effort into it.” (Chris MacIntosh, 2016)

"For Many Americans, 'Temp' Work Becomes a Permanent Way of Life" appeared on NBC News in April of 2014. The article follows Kelly Sibla and others who joined the ranks of the permanent no-benefit-no-FMLA class of temporary employees. The market started calling 'temp' jobs 'contract' jobs around the end of the Great Recession. "…labor economists warn that companies' growing hunger for a workforce they can switch on and off could do permanent damage to these workers' career trajectories and retirement plans" (Maddie McGarvey, 2014). Andrew Moran, writing for Time Doctor, looked at the same issue in "Employee Extinction? The Rise of the Contract, Temp Workers in Business" using Federal Reserve data and other countries. The phenomenon is not unique to the United States; however, the United States does not have a social safety net for things like housing the way other countries do.

James Balogun wrote a career advice piece on the subject called “Here’s the Deal with Contract to Hire Positions”, and although he left out the valuable statistics about the majority never converting to full time employees, the article provides a great analysis on the scenarios when taking such a job. The best quote is “Let’s be clear here. The employee is the one taking the risk in a contract to hire, not the employer”. (Balogun, 2016)

Outcome-Based Religion by Mac Dominick describes the management theories of Peter Drucker and their penetration into organized religion in Chapter 13. It is an interesting read and describes the tendency of many denominations to act in a business manner. It details pharmaceutical company foundations (Eli Lilly, among others) working with theological seminaries. The book mentions one "community church" that makes hundreds of referrals for psychiatric care annually; Dominick refers to this as the rise of "Christian Psychology". Like many other works that discuss the Roman Catholic faith, fact-checking its assertions remains a good idea. One example is the assertion that Catholicism teaches that salvation exists in all faiths; in August 2016, Brother Andre Marie wrote an explanation detailing the misunderstandings of that view.

Dr. Ed Hindson at Liberty University wrote an article denying preterism in 2005 called The New Last Days Scoffers. Donald Perkins discusses the refutation and explains the futurist view. J. R. Bronger wrote another analysis of the preterist view in August 1999 and called Realized Eschatology a poisonous belief. Bronger used a broad brush but made strong arguments, including references to Hymenaeus and Philetus, historical figures who claimed the resurrection was already past. JM wrote a more recent article with strong arguments opposed to futurism. James Loyd’s article at Christian Media Research takes issue with preterism and contains historical detail in addition to scriptural analysis, keeping Daniel’s debated seven years in the past rather than the future.

Posted in Spiritual notes | Tagged: , , ,

Age of the earth and the race of Jesus

May 9th, 2018 by L'ecrivain

Age-of-the-earth debates from the old-age side rest on linear regressions, which produce parameter estimates. Arguing about whether such an estimate is a fact is like arguing about whether the expected value of a portfolio is a fact: it is a mathematical outcome of a chosen formula, not a directly observed truth.
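To make the point concrete: a regression-based age is the slope of a fitted line, a parameter estimate that depends entirely on the model chosen. Here is a minimal sketch with invented data (the numbers are illustrative only, not real isotope measurements):

```python
# Minimal sketch: a regression "age" is the slope of a least-squares line,
# i.e. a parameter estimate from a chosen formula (data below is invented).

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (intercept, slope)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

# Hypothetical measurements: the inferred "age" lives in the slope.
xs = [0.1, 0.2, 0.3, 0.4, 0.5]
ys = [0.70, 0.72, 0.74, 0.76, 0.78]  # perfectly linear here; real data scatter

a, b = fit_line(xs, ys)
print(round(b, 4))  # slope (the estimated parameter) -> 0.2
```

Change the model (add a term, drop a point, weight the data differently) and the estimate changes with it, which is the sense in which it is an output of a formula rather than an observation.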

Genetic ancestry tests DON’T ACTUALLY REVEAL ANCESTRY [1]. This is a myth that new atheists push.

…It’s also quite possible for someone who is African American to get ancestry test results that say they’re 75 percent European… [1]

One cannot analyze a batch of DNA and determine where someone came from a million years ago, and mapping DNA results onto modern geopolitical borders is snake-oil selling. At best these results are correlations, and correlation does not imply causation.

The second claim is a favorite of anti-Israel proponents who secretly think the Judeans of the Bible were replaced en masse at some point in the past by people who looked different from the modern Israelis, who got their state as a result of Judaism-following ancestors. The argument is that this proves Jesus was ‘browner’ and did not have ‘blue eyes’ [2], and, by some hitherto unknown genetic predictive power, that he would therefore side with the PLA on questions of morality. King David being said to have had red hair puts the lie to the whole ‘browner’ argument. This is why genealogies are useless except for box-checking messiah status.

1. https://now.tufts.edu/articles/pulling-back-curtain-dna-ancestry-tests [archive | wayback]

2. https://www.timesofisrael.com/anomalous-blue-eyed-people-came-to-israel-6500-years-ago-from-iran-dna-shows/

Posted in Spiritual notes | Tagged: , ,

STEM jobs in the United States

March 19th, 2018 by L'ecrivain

The number of science, technology, engineering, and math (STEM) jobs in the United States shrank over the three decades from 1982 to 2012, and the drawdown accelerated from 2000 to 2012.

The highest occupational growth occurred in occupations requiring soft skills, led by K-12 teaching and non-doctor health-care support staff such as nurses, technicians, and therapists. From 2000 to 2012, those in the physical sciences (chemistry, physics, and others), biological scientists, and engineers all saw decreases in the availability of work in their fields. The percentage of the workforce that fell into the category of “engineer” declined by over 15% (David Deming, 2017). In “The Economics of Noncognitive Skills”, data from the Brookings Institution’s Hamilton Project shows that service jobs, such as customer-service roles, increased the most over the last three decades (Timothy Taylor, 14 October 2016).

Posted in Social Science notes | Tagged:

Decision Theory Articles

April 5th, 2016 by L'ecrivain

A very good article on decision theory comes from James Jones, Professor of Mathematics at Richland Community College.  The modes discussed are expected value (realist), also called the Bayesian principle; Maximax (optimist); Maximin (pessimist); and Minimax regret (opportunist).  The article uses the example of a bicycle shop choosing how many bicycles to purchase and sell; the example is very good and the explanation is well constructed.
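The four criteria are easy to compute on a payoff table. Here is a minimal sketch in the spirit of the bicycle-shop example; the order sizes, payoffs, and probabilities below are invented for illustration, not taken from the article:

```python
# Four decision criteria on a hypothetical payoff table.
# Rows = actions (order sizes), columns = demand states (numbers invented).
payoffs = {
    "order 20": [200, 200, 200],
    "order 30": [150, 300, 300],
    "order 40": [100, 250, 400],
}
probs = [0.3, 0.4, 0.3]  # assumed probabilities of the demand states

# Optimist: pick the action with the best best-case payoff.
maximax = max(payoffs, key=lambda a: max(payoffs[a]))
# Pessimist: pick the action with the best worst-case payoff.
maximin = max(payoffs, key=lambda a: min(payoffs[a]))
# Realist / Bayesian: pick the action with the highest expected value.
expected = max(payoffs, key=lambda a: sum(p * v for p, v in zip(probs, payoffs[a])))

# Minimax regret: regret = best payoff in that state minus your payoff there.
best_per_state = [max(col) for col in zip(*payoffs.values())]
regret = {a: max(b - v for b, v in zip(best_per_state, row))
          for a, row in payoffs.items()}
minimax_regret = min(regret, key=regret.get)

print(maximax, maximin, expected, minimax_regret)
```

Note how the optimist and pessimist disagree on the same table, which is the whole point of comparing the criteria.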

Forestry Economics: A Managerial Approach by John E. Wagner has a great explanation of the decision modes.  The Wikipedia article on Minimax includes pseudocode for using minimax in games such as chess.  Of particular interest was the mention of this technique’s use by Deep Blue, the computer that beat Garry Kasparov in chess.
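The game-playing use of minimax is the same alternating best/worst idea applied down a game tree. A toy sketch (the tree and its leaf scores are invented; real engines like Deep Blue add alpha-beta pruning, evaluation functions, and much more):

```python
# Tiny sketch of game-tree minimax: players alternate, the maximizer takes
# the best child score and the minimizer the worst.

def minimax(node, maximizing):
    """node is either a number (leaf score) or a list of child nodes."""
    if isinstance(node, (int, float)):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Invented 2-ply tree: the maximizer chooses a branch, the minimizer replies.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, True))  # minimizer yields 3, 2, 0; maximizer picks 3
```

This is the same structure as the decision-table Maximin criterion, except the “state of nature” is an adversary who actively picks the worst case at every level.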

Management and the Technology Professional – B302 Risk analysis using maximin criterion, minimax regret criterion, expected value criterion, and decision trees is a good example of decision theory writing as well, and it includes a Dilbert cartoon.  Ultimately the regret table at Wikipedia was one of the most useful.

Real-World Decision Making: An Encyclopedia of Behavioral Economics, edited by Morris Altman, captured my interest during a search related to the Laplace decision criterion.  It is more of a book on behavioral economics.  I mention it here not because of decision-theory content but because its page on Google Books led me to IndieBound.org, which seems to be a federation of independent bookstores.

Posted in Social Science notes | Tagged:
