
Zettelkasten and Writing with Joplin, BPG Fonts, Aider, Ollama, Deepseek r1 14B

This is my first attempt at weekly posts. I created an organizational schema and set up the files to begin the work. One thing I accomplished this week was using Aider to create a rapid prototype of a paired comparison analysis tool that runs on the console under any operating system with Python. I used Ollama with Deepseek R1 14B running locally as the backend model. The code for version 25.26.12.2053 is accessible on my website.

The idea of creating an economics website with a spiritual element began to intrigue me quite some time ago. It satisfies several stipulations related to the use of my time in the future. After some experimentation, I found that adding images via Zettlr, the word processor I had been using, is cumbersome. I could add them another way or in another program, but Zettlr inspires me to write. I have finally settled on simply using Joplin, because I am aging daily and have less time than in the past due to my long commute.

Part of my inspiration for this post comes from the 27 December 2025 issue of Coffee and Covid by Jeff Childers, in which he details his writing and organization process. I have several hundred megabytes' worth of notes in Joplin. I migrated many notes to Obsidian, but now I want them back. With Joplin, one may right-click a note and copy a Markdown link to use within another note. That procedure is less efficient than Zettlr's ability to start typing a colon and then select the note from a list that filters the notes based on what one types. I changed font families to the following:

Editor font family: BPG Courier GPL&GNU

Editor Monospace font family: BPG Courier S GPL&GNU

Viewer and Rich Text Editor font family: BPG Serif GPL&GNU

This lets me preview my writing in a serif font, which helps me write more effectively. Joplin automatically exports a backup of all the files into a single archive file daily. I need a second machine configured to export both that archive and the individual files, in case something happens and the collective archive file fails.


Paired Comparison Analysis

This is a simple paired comparison analysis that compares the items in a list against each other to find a ranking for decision-making. For n items, that works out to n(n - 1)/2 unique comparisons.
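
The listing below is the full program, but the core idea fits in a few lines. Here is a sketch of mine (not part of the generated code): every unique pair is judged exactly once, and an item's rank is its total number of wins.

from itertools import combinations
from collections import Counter

def rank_items(items, prefer):
    """Rank items by pairwise wins; prefer(a, b) returns the winning item."""
    wins = Counter()
    for a, b in combinations(items, 2):  # n * (n - 1) / 2 unique pairs
        wins[prefer(a, b)] += 1
    return wins.most_common()

# Toy judge that always favors the alphabetically earlier item
print(rank_items(["respirators", "motor oil", "tires"], min))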


#!/usr/bin/env python3
#######################################################
# Paired Comparison Analysis
# webmaster@memorymatrix.cloud
# 25.26.12.2053
#######################################################

import sys
import logging


def create_lists(list_a=None):
    """
    Build List A from user input and return it along with List B, a copy

    Args:
        list_a (list): Initial items for List A (optional)

    Returns:
        tuple: Two lists (A and B); B is always a copy of A
    """
    try:
        if list_a is None:
            list_a = []

        # Prompt for items until the user enters an empty line
        while True:
            item = input("Enter an item for List A (press Enter to stop): ")
            if not item:
                break
            list_a.append(str(item))

        # List B is a copy of List A so every pair is drawn from one pool
        list_b = list(list_a)

        logging.info("Lists created successfully")
        return list_a, list_b

    except KeyboardInterrupt:
        print("\nUser interrupted input")
        sys.exit(1)
    except Exception as e:
        logging.error(f"Error creating lists: {str(e)}")
        raise


def count_preferences(comparison_results):
    """
    Count how many times each item was preferred

    Args:
        comparison_results (list): List of tuples from compare_items()

    Returns:
        dict: Dictionary mapping items to their preference counts

    Raises:
        ValueError: If invalid results are provided
    """
    try:
        if not comparison_results:
            raise ValueError("No comparison results provided")

        # Initialize count dictionary
        counts = {}

        for item_a, item_b, choice in comparison_results:
            if choice == 1:
                counts[item_a] = counts.get(item_a, 0) + 1
            elif choice == 2:
                counts[item_b] = counts.get(item_b, 0) + 1

        return counts

    except Exception as e:
        logging.error(f"Error counting preferences: {str(e)}")
        raise

def compare_items(list_a, list_b):
    """
    Compare unique pairs of items between two lists and store preferences

    Args:
        list_a (list): First list of items
        list_b (list): Second list of items

    Returns:
        list: Results of comparisons

    Raises:
        ValueError: If lists are empty or mismatched
    """
    try:
        if not list_a or not list_b:
            raise ValueError("Both lists must contain items")

        results = []

        # Visit each unordered pair exactly once: skip self-comparisons,
        # and only ask when item_a sorts before item_b so the reverse
        # ordering of the same pair is never asked about a second time
        for item_a in list_a:
            for item_b in list_b:
                if item_a == item_b or item_a > item_b:
                    continue

                try:
                    # Re-prompt until the user enters a valid 1 or 2
                    while True:
                        preference = input(f"Compare {item_a} vs {item_b}: "
                                           f"Enter 1 if you prefer {item_a}, "
                                           f"2 if you prefer {item_b}: ")
                        if preference in ("1", "2"):
                            break
                        print("Invalid input. Please enter 1 or 2.")

                    results.append((item_a, item_b, int(preference)))

                except KeyboardInterrupt:
                    print("\nUser interrupted comparison")
                    return results  # Return what we have so far

    except Exception as e:
        logging.error(f"Error during comparison: {str(e)}")
        raise
    return results

if __name__ == "__main__":
    # Main program entry point with command line arguments
    try:
        # Configure logging
        logging.basicConfig(
            level=logging.INFO,
            format='%(asctime)s - %(levelname)s - %(message)s',
            handlers=[logging.StreamHandler()]
        )

        # Seed List A from any command line arguments; create_lists()
        # prompts for additional items either way
        if len(sys.argv) >= 2:
            list_a = [str(arg) for arg in sys.argv[1:]]
        else:
            print("No command line arguments provided.")
            first_item = input("Enter the first item for List A: ")
            list_a = [first_item]

        list_a, list_b = create_lists(list_a=list_a)

        results = compare_items(list_a, list_b)

        # Get preference counts
        preferences = count_preferences(results)

        print("\nComparison Results:")
        for item_a, item_b, choice in results:
            winner = item_a if choice == 1 else item_b
            print(f"Comparing {item_a} vs {item_b} - Preferred: {winner}")

        print("\nPreference Counts:")
        for item, count in preferences.items():
            print(f"{item} was preferred {count} times")

    except ValueError as e:
        # Not enough items to compare (at least two are required)
        print(f"Error: {e}")
        sys.exit(1)
    except KeyboardInterrupt:
        print("\nProgram interrupted by user")
        sys.exit(0)


# Unit Tests
# To run these, activate the virtual environment (source venv/bin/activate)
# and then run: python3 -m pytest script-name.py
def test_create_lists():
    """
    Test create_lists with different input scenarios
    """
    from unittest.mock import patch

    # Default case: three items, then a blank entry to stop
    with patch('builtins.input', side_effect=['apple', 'banana', 'berry', '']):
        list_a, list_b = create_lists()
        assert len(list_a) == 3
        assert list_b == list_a

    # Single item
    with patch('builtins.input', side_effect=['test', '']):
        list_a, list_b = create_lists()
        assert len(list_a) == 1
        assert list_b == list_a


def test_compare_items():
    """
    Test compare_items with valid and invalid input
    """
    from unittest.mock import patch

    # Only the unique pair ('a', 'b') is asked about; 'a' wins
    with patch('builtins.input', side_effect=['1']):
        results = compare_items(['a', 'b'], ['a', 'b'])
        assert results == [('a', 'b', 1)]

    # An invalid entry is re-prompted before the pair is recorded
    with patch('builtins.input', side_effect=['3', '2']):
        results = compare_items(['a', 'b'], ['a', 'b'])
        assert results == [('a', 'b', 2)]


def test_count_preferences():
    """
    Test count_preferences against known comparison results
    """
    results = [('apple', 'pear', 2), ('apple', 'tomato', 1)]
    preferences = count_preferences(results)
    assert preferences == {'pear': 1, 'apple': 1}
Thoughts on Aider

Well, it has taken a little while, but thanks to Getting Things GNOME! (GTG), I installed Aider and Ollama and began some vibe coding. Before this, I had written data science code in Python and R and produced some GUI applications for Linux. I also developed some software that is in the Microsoft Store and have produced software for the Windows desktop over the past 15 years or more. The applications I have in the Microsoft Store preceded the advent of large language model coding assistants.

For the Aider model, I am using Ollama and Deepseek r1 14B running locally on the CPU. I have a 4 GB GeForce GTX 1650 Super, which is not going to handle very much advanced neural net math. I previously used GPT4All with unlimited CPU consumption, and it caused the system to halt due to overheating; to prevent that, I had to limit it to 3 or 4 cores. Ollama has not caused that problem, because it defaults to one thread per physical core, which is very helpful. The system runs at the top of the thermal limit while waiting for Ollama/Deepseek, but it works in a very stable manner, and much of the time it runs several degrees below the upper critical limits. This is also a function of the system itself, which has a CPU upgraded from the manufacturer's installed one while still using the original CPU cooler due to space constraints.
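
If the defaults ever run too hot, Ollama's REST API also accepts a num_thread option per request, so the thread count can be capped explicitly. A minimal sketch, assuming Ollama is listening on its default port (11434) and deepseek-r1:14b has been pulled:

import json
import urllib.request

# Ask the local Ollama server for a completion while capping CPU threads
payload = {
    "model": "deepseek-r1:14b",
    "prompt": "In one sentence, what is a paired comparison analysis?",
    "stream": False,
    "options": {"num_thread": 4},  # limit worker threads to keep temperatures down
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])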

My goal is to replace a very large spreadsheet that contained all of the items on my wish list at a specific point in time. I had trouble knowing which item to handle first when there were competing priorities such as home maintenance, vehicle maintenance, vehicle luxury upgrades, vehicle necessities, and so on. So I selected between each pair of items and counted the number of wins for each item. As an example, one item that had been on my list for a while was a pack of respirators for working around sawdust, flakes, and similar debris. I had kept putting it off because the projects I planned to use it for stayed on a back burner. Yet it was one of the top picks: when compared against the 59 other items, it received the most votes. One might say that was to be expected. Another surprise was the backup stock of motor oil, so that rather than having only the next oil change's worth of oil on hand, I would now have the next two changes' worth.
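
For scale, a full paired comparison of that 60-item list (the respirators plus the 59 others) takes 60 x 59 / 2 = 1,770 individual choices, which is quick to confirm:

from math import comb
print(comb(60, 2))  # 1770 unique pairs for a 60-item list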

I am not sure what to do with the output code. I was thinking about putting it on GitHub, but it seems that GitHub is turning into something akin to the single repository of software code online and the master of it all. That unsettles me somewhat, so once I get the initial version working, I will consider my options.

My process broke down around Aider's search-and-replace blocks, and I had to revisit some earlier documentation of outputs. The way I do this is simple: I save each revision in a timestamped notebook entry in either QOwnNotes or Gnote. Gnote is fast and easy, but QOwnNotes allows me to insert images and draft these blog entries.