Berry Web Blog -- Computers and programming.

The Case Against Lisp - And When Would I Choose Lisp Today? -- Aug 19, 2023, 7:55:40 PM

I wrote an article back in 2015 entitled The Case for Lisp. It lays out a simple argument, founded on expressiveness, for choosing Lisp.

Now I would like to lay out some reasons why one might not choose Lisp for a project. I will focus on Common Lisp in this article.

  1. The very s-expression syntax that enables structural macros, and thus gives Lisp some extra expressive power over most other programming languages, also makes the code harder to read for the vast majority of people. Perhaps the flexibility of s-expression syntax is outweighed by the increased cognitive overhead required to read the code.  Even after I spent quite a lot of time reading and writing Lisp over a period of several years, I still found it a bit more challenging to read Lisp code than, for example, Java code.
  2. I think that in the age of autocomplete and IDE-assisted coding, Lisp's OOP model, with its function-comes-before-object mentality, is less ergonomic than the dominant object.method paradigm.  Consider `(bark dog)` vs dog.bark() to see what I mean.  The latter syntax is extremely amenable to autocomplete in an IDE and provides some "namespacing" for the bark() method: it won't pollute the global autocomplete context but instead only comes up in the context of the dog object.
  3. Another original benefit of Lisp's s-expression syntax is the ease with which you can write data using the same syntax as code.  Lists are built in, and with constructs like plists and alists, maps and general nested structures are also well supported.  I think other languages have caught up to Lisp in this regard for practical purposes -- JSON is arguably better than s-expressions for expressing data, and is well integrated into languages like JavaScript and Python.
  4. Lisp structural macros are a double-edged sword.  They can enable some neat tricks, but these tricks can completely disrupt the control flow of the code and make things quite hard to understand.  For example, the book "Let Over Lambda" by Doug Hoyte contains numerous macro tricks that mess with scope, bindings, evaluation, etc.  If you use a lot of these techniques you can greatly obfuscate your code, which is not generally considered a benefit.  If you really need a DSL to model a specific problem, you can always create one in a non-Lisp language as well, though you will likely need a parser generator like ANTLR.  For 99.9% of projects, I would say the common data, function, and object abstractions are sufficient, and Lisp's structural-macro chops either aren't needed or provide only a marginal advantage.
  5. I personally think people sometimes go overboard with the Lisp "form"-chaining programming style, when the code would be easier to understand if intermediate values were saved to named variables along the way, as many other programming languages encourage.  Other languages technically support function chaining just the same, but the massive effective one-liners that are considered fairly idiomatic in Lisp would be frowned upon there.
  6. Common Lisp has lots of arcane and somewhat archaic core functions and functionality, especially around manipulating cons cells and the like: car, cadr, assoc, elt vs. nth, and so on and so forth.
  7. Lisp syntax tricks can only get you so far when solving real-world problems.  When you compare Lisp programs written to solve specified programming tasks -- for instance, the tasks in the Debian Benchmarks Game (Lisp) -- do you actually see Lisp clocking in consistently with less code and more readable code than other, less theoretically "expressive" languages?  To be honest, no you don't.  In the real world, there is a certain amount of domain-driven complexity that will be present regardless of what language you write in, and I think it is rare that choosing Lisp for its supposed expressiveness advantages will actually greatly reduce the complexity of modelling a problem.
  8. Finally, the last but perhaps most important factor for me in evaluating a language/runtime and its limitations is to consider what systems have been created in that language, and the characteristics of those systems. I'll write another article on the pros and cons of other languages/runtimes from this point of view, but for Lisp I will say that most Lisp systems I have interacted with feel a tad unpolished, and indeed perhaps unfinished, unless you are a hardcore Lisper who wants to interact with the system primarily or entirely in Lisp itself.  Emacs, the LispWorks IDE, and McCLIM are some systems I've used that left me feeling this way. I just don't see the level of polish, performance, and system integration that is commonly seen in C++-based software, for instance.

Common Lisp is a good language and has several good runtimes, including SBCL.  Lisp had many of the benefits of Java/managed runtimes before Java even existed: to my knowledge, memory safety (in theory at least), a runtime that allows live modifications (to a greater extent than the JVM! though this can be a footgun in practice), lower memory usage than the JVM (in my experience), similar runtime performance to the JVM, and full support for object-oriented programming via CLOS (albeit with a somewhat unusual syntax and mental model).

Coming out of college, Lisp was my favorite language.  I tried to shoehorn it into my work at the first few places I worked, and used it for personal systems for our family as well. Over time, however, it has mostly been displaced from my personal workflow, partly because of the network effects of others using other languages, but also because I increasingly felt I could do the job better in other languages. Python has become my choice for most interactive or exploratory programming; Java remains a good choice for building a structured business system; for me, C is the language for low-level programming when it is needed, and C++ when necessary or called for.

I still see Lisp as having certain strengths.  It's good for starting exploratory code in an interactive style similar to Python, which can then be transitioned into a real product with better performance than Python. I think Lisp probably has saner packaging than Python.  It also has a good story around profiling and around making real-time changes to running code when necessary.

In conclusion, would I choose Lisp for a project today? 

For me personally, I would be unlikely to choose Lisp for a new project unless I simply had an excess of time and wanted to tinker -- or unless I had specific requirements that matched Lisp's strengths as I view them.

Currell Berry
Testing FFI Hot Loop Overhead - Java, C#, PHP, and Go -- Jul 15, 2023, 4:13:23 AM

I wanted to compare the FFIs of a few popular programming languages to evaluate (1) the ease of use of the FFI functionality and (2) the minimal overhead in a hot-loop scenario. I chose Java, C#, PHP, and Go for my experiment.

This is in no way representative of FFI usage in the general case -- just indicative of overhead of calling a minimal function.

The code for this experiment is here: https://gitlab.com/vancan1ty/ffihotloopexamples .  The JNI code was based off of this github repo by RJ Fang: https://github.com/thefangbear/JNI-By-Examples .

All tests are run on my Dell Precision M4800 laptop. I evaluate three scenarios across several languages.

Scenario 1 is as follows: we loop up to LOOPMAX -- which I set to 2000000000L -- calling a native addOne function via FFI on each iteration, and perform a bit of modulus arithmetic and conditional logic to keep the loop from being optimized away by the compiler.

        Utilities util = new Utilities();
        long st1 = System.currentTimeMillis();
        long counter = 0;
        while(counter < LOOPMAX) {
            counter = util.addOne(counter);
            if(counter % 1000000 == 0) {
                counter = counter + 10;
            }
        }
        long et1 = System.currentTimeMillis();

Scenario 2 is the same as scenario 1 -- but we perform all the operations directly in the programming language being tested -- no FFI is involved.  We can expect scenario 2 to run faster than scenario 1 -- and the difference in the performance between the scenarios is a measure of the overhead of calling out via FFI to the native library.

        long st2 = System.currentTimeMillis();
        long counter2 = 0;
        while(counter2 < LOOPMAX) {
            counter2 = counter2 + 1;
            if(counter2 % 1000000 == 0) {
                counter2 = counter2 + 10;
            }
        }
        long et2 = System.currentTimeMillis();

Scenario 3 delegates the entire hot loop to the natively implemented code.  You might expect this to be the fastest of all.

        long st3 = System.currentTimeMillis();
        long counter3 = util.loopToMax(0,LOOPMAX);
        long et3 = System.currentTimeMillis();

Java -- JNI

First comes the Java -- JNI implementation of the scenarios. 

$ java -version
openjdk version "17.0.1" 2021-10-19 LTS
OpenJDK Runtime Environment (build 17.0.1+12-LTS)
OpenJDK 64-Bit Server VM (build 17.0.1+12-LTS, mixed mode, sharing)

Using RJ Fang's script to run the code that I added to the JNI-By-Examples repo yields the performance results below (times in seconds):

sh jnihelper.sh --execute-java

Java Native Interface Helper
Created by RJ Fang
Options: ./jnihelper.sh
--refresh-header     Refreshes the header
--build              Tries CMake build
--execute-java       Tests library

t1: 22.415 to find 2000000010 (89225964.756 per sec)
t2: 3.29 to find 2000000010 (607902735.562 per sec)
t3: 5.889 to find 2000000010 (339616233.656 per sec)

So in this case, the pure Java implementation (scenario 2) outperforms even the C++ implementation (scenario 3) by a decent margin -- even though the C++ implementation is compiled with -O3 as part of the cmake process.  I am not sure why this is.

Note that the Java JNI testcase was kind of a pain to set up; I needed the JNI-By-Examples repo and its associated scripts to get it working.  The code for the Java cases is in java/in/derros/jni/Utilities.java .

The code for the C++ side of the java implementation is written in Utilities.cpp (building on the JNI Examples project structure)

/*
 * ==============IMPLEMENTATION=================
 * Class:     in_derros_jni_Utilities
 * Method:    addOne
 * Signature: (J)J
 */
JNIEXPORT jlong JNICALL Java_in_derros_jni_Utilities_addOne
  (JNIEnv * env, jobject obj, jlong valIn) {
    return 1+valIn;
  }

/*
 * ==============IMPLEMENTATION=================
 * Class:     in_derros_jni_Utilities
 * Method:    loopToMax
 * Signature: (JJ)J
 */
JNIEXPORT jlong JNICALL Java_in_derros_jni_Utilities_loopToMax
  (JNIEnv * env, jobject obj, jlong start, jlong max) {
    long counter2 = start;
    while(counter2 < max) {
        counter2 = counter2 + 1;
        if(counter2 % 1000000 == 0) {
            counter2 = counter2 + 10;
        }
    }
    return counter2;
  }

JNI generates the funky-looking method signatures, and then you just copy the signatures and fill in the implementations in your own cpp file.  It's a bit weird, but it does work.

PHP

FFI in PHP is a breeze; I was able to write the following script to easily reproduce the Java testcases above.

<?php
        $LOOPMAX = 2000000000;
        $ffi = FFI::cdef("
            long addOne(long valIn);
            long loopToMax(long start, long max);
        ", "./simple.so");

        $st1 = microtime(true);
        $counter = 0;
        while($counter < $LOOPMAX) {
            $counter = $ffi->addOne($counter);
            if($counter % 1000000 == 0) {
                $counter = $counter + 10;
            }
        }
        $et1 = microtime(true);

        $st2 = microtime(true);
        $counter2 = 0;
        while($counter2 < $LOOPMAX) {
            $counter2 = $counter2 + 1;
            if($counter2 % 1000000 == 0) {
                $counter2 = $counter2 + 10;
            }
        }
        $et2 = microtime(true);

        $st3 = microtime(true);
        $counter3 = $ffi->loopToMax(0,$LOOPMAX);
        $et3 = microtime(true);

        $tt1 = (($et1-$st1));
        $tt2 = (($et2-$st2));
        $tt3 = (($et3-$st3));
        print "t1: $tt1 to find $counter  (" . sprintf("%.3f",($LOOPMAX/$tt1)) . " per sec)\n";
        print "t2: $tt2 to find $counter2 (" . sprintf("%.3f",($LOOPMAX/$tt2)) . " per sec)\n";
        print "t3: $tt3 to find $counter3 (" . sprintf("%.3f",($LOOPMAX/$tt3)) . " per sec)\n";

?>

To complete the C side of the picture, I wrote the following simple header file (utility.h)

long addOne (long valIn);
long loopToMax(long start, long max);

and the following simple C implementation of the core logic (utility.c)

#include "utility.h"

long addOne(long valIn) {
    return 1 + valIn;
}

long loopToMax(long start, long max) {
    long counter2 = start;
    while(counter2 < max) {
        counter2 = counter2 + 1;
        if(counter2 % 1000000 == 0) {
            counter2 = counter2 + 10;
        }
    }
    return counter2;
}

Compile the small C program to a shared object file that can be loaded by PHP with the following command:

gcc -shared -O3 -o simple.so -fPIC utility.c

Note that I believe we can get somewhat better performance by omitting -fPIC.

Then we are able to benchmark

$ php -version
PHP 8.1.2-1ubuntu2.11 (cli) (built: Feb 22 2023 22:56:18) (NTS)
Copyright (c) The PHP Group
Zend Engine v4.1.2, Copyright (c) Zend Technologies
    with Zend OPcache v8.1.2-1ubuntu2.11, Copyright (c), by Zend Technologies


$ php bench.php 
t1: 215.97937607765 to find 2000000010  (9260143.428 per sec)
t2: 37.147371053696 to find 2000000010 (53839610.806 per sec)
t3: 5.6606829166412 to find 2000000010 (353314260.744 per sec)

PHP is much slower than Java -- by about a factor of 10.  Scenario 3 comes out to about the same runtime as in Java, as expected, since all the heavy lifting is borne by the native code here.

C#

With C#, I found that it is essential to set Optimize to true in the .csproj file to get competitive performance.

dotnet --version 
7.0.202

Similarly to PHP, C# makes FFI a breeze.  Just load the .so file, define the signatures of the external functions, and then you can call the native functions directly.  Massively simpler than JNI.

using System;
using System.Runtime.InteropServices;

public class Bench
{
    // Import simple.so (containing the functions we need) and define
    // the methods corresponding to the native functions.
    [DllImport("../../../simple.so")]
    private static extern long addOne (long valIn);

    [DllImport("../../../simple.so")]
    private static extern long loopToMax(long start, long max);

    public static long CurrentMillis()
    {
       long milliseconds = DateTime.Now.Ticks / TimeSpan.TicksPerMillisecond;
       return milliseconds;
    }

    public static long LOOPMAX = 2000000000;
    

    public static void Main(string[] args)
    {

        long st1 = CurrentMillis();
         long counter = 0;
         while(counter < LOOPMAX) {
             counter = addOne(counter);
             if(counter % 1000000 == 0) {
                 counter = counter + 10;
             }
         }
         long et1 = CurrentMillis();

         long st2 = CurrentMillis();
         long counter2 = 0;
         while(counter2 < LOOPMAX) {
             counter2 = counter2 + 1;
             if(counter2 % 1000000 == 0) {
                 counter2 = counter2 + 10;
             }
         }
         long et2 = CurrentMillis();

         long st3 = CurrentMillis();
         long counter3 = loopToMax(0,LOOPMAX);
         long et3 = CurrentMillis();

         double tt1 = ((et1-st1)/1000.0);
         double tt2 = ((et2-st2)/1000.0);
         double tt3 = ((et3-st3)/1000.0);
         Console.WriteLine("t1: " + tt1 + " to find " + counter + " (" + String.Format("{0}",(LOOPMAX/tt1)) + " per sec)");
         Console.WriteLine("t2: " + tt2 + " to find " + counter2 + " (" + String.Format("{0}",(LOOPMAX/tt2)) + " per sec)") ;
         Console.WriteLine("t3: " + tt3 + " to find " + counter3 + " (" + String.Format("{0}",(LOOPMAX/tt3))  + " per sec)");
    }
}

The performance is great for dotnet.

$ dotnet run
t1: 7.466 to find 2000000010 (267881060.8090008 per sec)
t2: 3.963 to find 2000000010 (504668180.6712087 per sec)
t3: 5.799 to find 2000000010 (344887049.4912916 per sec)

.Net 7 is able to run the FFI hot path in about a third of the time that Java 17 requires. .Net 7 is not nearly as fast as Java 17 when comparing the implementations written with no FFI, but it's in the same ballpark.  Scenario 3 is close between all the languages so far, as expected as the heavy lifting here is in native code.

I think the excellent FFI support of C# -- in both usability and performance -- together with its better support for value-oriented memory layouts, are two of the secret weapons that give C# a big edge for applications that involve integrating with native or high-performance code.  Java has good performance for a managed language but bogs down compared to C# when asked to talk to native code.  For example, Unity uses C#; I doubt Java would have been as successful in this role.  The "everything is an object" model of Java also likely hampers interop with native libraries doing heavy math on image, video, or language-modelling data -- I don't have proof of that, but it's a hunch on my part.

These are two areas Java could stand to catch up with C# in.

Go

For Go, I only implemented scenario 1, as that is the scenario that exercises the FFI the most.

I used cgo's inline functionality to define the native function within the Go file -- we could likely call out to simple.so as well, but I haven't implemented that yet.

I timed this testcase on the command line like so

$ time go run bench.go 
2000000010

real    2m7.746s
user    2m7.812s
sys     0m0.240s

Go's FFI hot-loop performance is far worse than Java's and C#'s.  I think this has something to do with Go's green-threads model -- each cgo call has to switch off the goroutine's stack onto a C stack, which adds per-call overhead.

Java - Project Panama

Last, let's return to Java to look at the new Project Panama FFI API.  This promises to reduce the effort of interacting with native code from Java (JNI is so clunky!).  For this test I downloaded the latest generally released JDK build (at the time I ran my tests):

$ ~/software/jdk-20.0.1/bin/java -version
java version "20.0.1" 2023-04-18
Java(TM) SE Runtime Environment (build 20.0.1+9-29)
Java HotSpot(TM) 64-Bit Server VM (build 20.0.1+9-29, mixed mode, sharing)

Finally I can write code in Java that calls out to a C library with complexity comparable to the other languages in this comparison.  Note that I only implemented scenario 1 (FFI in the loop) using the Panama approach.

package net.berryplace.testing;

import java.lang.foreign.Linker;
import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.SymbolLookup;
import java.lang.foreign.ValueLayout;
import java.nio.file.Paths;

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodType;

public class JavaBench2 {

    private static final SymbolLookup libLookup;
    static String libPath = "simple.so";

    static {
        // loads a particular C library
        System.load(Paths.get("").toAbsolutePath().toString() + "/" + libPath);
        libLookup = SymbolLookup.loaderLookup();
    }

    public static MethodHandle getaddOneMethod() {
        Linker linker          = Linker.nativeLinker();
        var addOneMethod = libLookup.find("addOne");

        if (addOneMethod.isPresent()) {
            var methodReference = linker
                    .downcallHandle(
                            addOneMethod.get(),
                            FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.JAVA_LONG)
                    );

            return methodReference;

        }
        throw new RuntimeException("addOne function not found.");
    }

    static long LOOPMAX = 2000000000L;

    public static void main(String[] args) throws Throwable {
        MethodHandle addOne = getaddOneMethod();

        long st1 = System.currentTimeMillis();
        long counter = 0;

        while(counter < LOOPMAX) {
            counter = (long) addOne.invoke(counter);
            if(counter % 1000000 == 0) {
                counter = counter + 10;
            }
        }
        long et1 = System.currentTimeMillis();

        double tt1 = ((et1-st1)/1000.0);
        System.out.println("t1: " + tt1 + " to find " + counter + " (" + String.format("%.3f",(LOOPMAX/tt1)) + " per sec)");
    }
}

This isn't quite as slick as the C# approach, but what it lacks in slickness it makes up for in being very composable and structured. The Panama team has also built a tool called jextract that further simplifies calling into foreign libraries -- I haven't tried it yet; I'm pretty pleased with the default Panama API on its own.

Note that the above calls into the simple.so object we built earlier for the PHP FFI test.

From the directory containing the class file, we can compile and run like so using java 20

~/software/jdk-20.0.1/bin/javac --release 20 --enable-preview JavaBench2.java 
(cd ../../../ && ~/software/jdk-20.0.1/bin/java --enable-preview net/berryplace/testing/JavaBench2 )

Resulting in the below output on my machine

t1: 27.383 to find 2000000010 (73038016.287 per sec)

While the Panama API is much simpler to use, the bad news is that its performance is even worse than the standard JNI API for this use case.  C# is much faster.

Conclusions

Here is a table containing all the benchmark results:

                            Java – JNI      PHP       C#       Go   Java – Panama
  Scenario 1 (FFI hot loop)     22.415   215.979    7.466    127.7          27.383
  Scenario 2 (pure)              3.290    37.147    3.963       --              --
  Scenario 3 (offloaded)         5.889     5.661    5.799       --              --

All times are in seconds; "--" means I did not implement that scenario.

C# and PHP tie for ease of use, and C# is far and away the winner in terms of FFI performance.  Java is last for ease of use with JNI; the Project Panama API is competitive with the other languages on ease of use, but regresses on performance relative to JNI.  Overall, Java's FFI performance is middling.  Go and PHP bring up the rear on FFI performance -- though for PHP I think it's not so much that the FFI is slow as that PHP itself is a slower language, so more time is spent on the PHP side when running the hot loop.

Currell Berry
Remotely Modifying a Running Program Using Swank -- Jun 15, 2023, 2:15:09 AM

One of the strengths of Common Lisp is that it includes support for dynamically redefining classes and functions at run-time, while keeping data intact. Theoretically at least, this should make it possible to keep programs continually running in production while changing pieces of them. "Common Lisp Recipes" by Edi Weitz includes the following note about this functionality:

If you've ever talked to experienced Lispers, you've probably heard "war stories" of huge and complex systems to which substantial modifications were applied while they kept running and without interrupting the services they provided. Although this sometimes has to be taken with a grain of salt, it is in fact true that many COMMON LISP features were designed from the ground up to be dynamic in the sense that they can be changed at run time. This includes CLOS, where an object can change from one class to another, and where classes can be modified, although they already have objects "hanging off" of them.

– "Common Lisp Recipes" by Edi Weitz, section 13-8

My understanding is that at least some of this functionality is difficult or nonstandard to replicate in other languages such as Python and Java (please feel free to let me know if I am wrong about that!).

Anyway, I would like to interact in Lisp with remote instances of the project I am currently working on – dumb-mq – so I figured it would be helpful to start with a small example of remote connection and redefinition.

This example uses SBCL and SLIME-mode in emacs, but it should work with other Lisp implementations and other tools that support the swank protocol as well. The easiest way I currently know of to get SBCL and emacs set up is to download the excellent Portacle IDE by Nicolas Hafner. Alternatively, just install them yourself.

Write the following lisp file: swankdemo.lisp

;; a little common lisp swank demo
;; while this program is running, you can connect to it from another terminal or machine
;; and change the definition of doprint to print something else out!
;; (ql:quickload :swank)
;; (ql:quickload :bordeaux-threads)

(require :swank)
(require :bordeaux-threads)

(defparameter *counter* 0)

(defun dostuff ()
  (format t "hello world ~a!~%" *counter*))

(defun runner ()
  (bt:make-thread (lambda ()
		    (swank:create-server :port 4006)))
  (format t "we are past go!~%")
  (loop while t do
       (sleep 5)
       (dostuff)
       (incf *counter*)
       ))

(runner)

You can run this program as follows:

sbcl --load swankdemo.lisp

The program will run indefinitely, printing a message every five seconds and incrementing its counter. Imagine instead that this program was accepting connections indefinitely and providing an important service.

By default swank will accept connections only from localhost – if you would like to connect from a different computer you can use ssh tunneling to forward the port on the remote machine to a port on your local computer. For example

ssh -L4006:127.0.0.1:4006 username@example.com

will securely forward port 4006 on the server at example.com to your local computer's port 4006.

Let's connect to the program. Fire up emacs, type M-x slime-connect, at the prompts select 127.0.0.1 (the default) and port 4006 (type this in). If all went well, you are now connected to the remotely running lisp program! Just to check, see if you can retrieve the current value of the counter:

CL-USER> *counter*

Now let's say you want to change the definition of dostuff and reset the counter while you are at it. Type the following into an emacs scratch buffer, select it, and send it to the remote Lisp program using M-x slime-eval-region (or an alternate method).

(defun dostuff ()
  (format t "goodbye world ~a!~%" *counter*))
(setf *counter* 0)

Observe swankdemo's output in the console – you will see the output change and the counter be reset. Success!

You can do more complicated redefinitions and changes – refer to the Common Lisp Standard (draft) section 7.2 and 7.3 for some information on modifying objects at run-time.

Currell Berry
How to Designate Single Window for Popup Buffers in Emacs -- Jun 15, 2023, 2:04:55 AM

This blogpost is inspired by the approach found here.

One of the things that used to annoy me about programming in emacs with SLIME mode (Common Lisp) is that SLIME would frequently choose to open a popup buffer in one of the windows I was trying to use for some other task. For instance, various actions in SLIME will open up a completion buffer, debugger pane, or inspection buffer. I eventually realized that what I really wanted was to designate a given window where all emacs popups would open by default, so that my train of thought in the other windows could remain undisturbed. Below is some Emacs Lisp code that enables this functionality:

(defun berry-choose-window-for-popups ()
  "run with your cursor in the window which you want to use to open up 
   all popups from now on!"
  (interactive)
  (set-window-parameter (selected-window) 'berrydesignated t)
  (berry-setup-popup-display-handler))

(defun berry-setup-popup-display-handler ()
  "adds an entry to display-buffer-alist which allows you to designate a window 
   for emacs popups. If the buffer is currently being displayed in a given 
   window, it will continue to use that window. Otherwise, it will choose your 
   designated window which should have been already set."
  (interactive)
  (add-to-list 'display-buffer-alist
	       `(".*" .
		 ((display-buffer-reuse-window
		   berry-select-window-for-popup
		   display-buffer-in-side-window
		   )
		  .
		  ((reusable-frames     . visible)
		   (side                . bottom)
		   (window-height       . 0.50)))
		 )))

(defun berry-select-window-for-popup (buffer &optional alist)
  "Searches for the a window which the 'berrydesignated parameter set.
    Returns the first such window found. If none is found, returns nil."
  (cl-block berry-select-window-for-popup
    (let* ((winlist (window-list-1 nil nil t))
	   (outindex 0))
      (while (< outindex (length winlist))
	(let ((candidate (elt winlist outindex)))
	  (if (eql t (window-parameter candidate 'berrydesignated))
	      (progn
		(set-window-buffer candidate buffer)
		(cl-return-from berry-select-window-for-popup candidate)))
	  (cl-incf outindex)
	  ))
      nil)))

(defun berry-clear-popup-setting ()
  "clears the 'berrydesignated flag on all windows, thus removing the designation 
   of any given window to host popups. also removes the popup handler registration"
  (interactive)
  (cl-loop for window in (window-list-1 nil nil t) do
	   (set-window-parameter window 'berrydesignated nil))
  (pop display-buffer-alist)
  )

My usual window layout when programming in Emacs looks like the following (note that an emacs "window" is a pane within a frame; what most other environments call a window, emacs calls a frame):

+-----------------+
|        | second |
|        | code   |
|primary | window |
|code    |--------|
|window  | REPL & |
|        | POPUPS |
|        |        |
+-----------------+

So what I do after opening all the windows I want is put my cursor in the "REPL & POPUPS" window and run berry-choose-window-for-popups. The content of my other windows then remains undisturbed by IDE functions unless I explicitly change buffers in one of those windows.

Currell Berry
Criteria for Software Freedom -- Jun 21, 2023, 2:23:08 AM

"Free software" means software that respects users' freedom and community. Roughly, it means that the users have the freedom to run, copy, distribute, study, change and improve the software. Thus, "free software" is a matter of liberty, not price

– Free Software Foundation

1 Introduction

The FSF's definition of free software, written above, is a useful broad principle which relates to how much control a human user has over a given computer program. In this article, I discuss some specific criteria which we can use to assess "Software Freedom".

I think the typical definition people reach for when they think "free software" is simply whether the software is open-source, or perhaps whether it costs nothing to use. While those are reasonable definitions of "free software", I, like the FSF, think it is useful to load the term "Free Software" with more implications in order to better capture the nature of the relationship of a human user to a piece of software. What is important is not really the legal status of a piece of code, but rather the practical level of control which a user has over that code. In this era of ever-increasing computer technology, I think it becomes more and more important that humans control the computations which they use, and not the other way around.

I propose three criteria which I think are especially relevant today in our age of cloud services and increasingly complex software. My goal is to assess the level of control a human user has over a piece of software which she uses. The first criterion is simply the basic definition of open-source, the second is mostly implied by Stallman's definition of "Free Software", and the third is not so directly implied. My criteria are:

  • Availability of source-code
  • Control over deployment
  • Accessibility to understanding

2 Availability of source-code

This criterion is probably what most people think of when they think of "free software" or "open-source software". Whether a piece of software is run remotely or locally, having access to the source code can give a user great insight into what the software is actually doing. Availability of source code often goes hand in hand with the user having greater "control over deployment". This criterion is the one addressed by the various open-source software licenses.

3 Control over Deployment

Control over deployment is a basic precondition for control over computation. If the computer user cannot control when an application starts, stops, and is modified, then the user cannot say that he controls the computation being done. Many of today's cloud web-apps run afoul of this criterion – not only are most of them closed-source and closed-data, but they can and will be discontinued whenever the provider decides, without recourse for the user. Cloud services are discontinued all the time – see this page or this page for some examples.

I define three levels of "control over deployment":

  0. The user does not run the software, and does not have access to enough information about the infrastructure, source, and data of the project to "fork" the deployment of the software and run it herself.
  1. The user does not actually run the software on her computer, but if the service is ever discontinued or negatively modified she has enough information to "fork" the deployment of the software and run it herself.
  2. The user controls execution of the software herself.

Level 0 is the default level of control over most web-apps. Google Translate is an example of a service I classify at this level. If Translate is ever discontinued, I have no ability to bring it back. I do not have access to the source code for Google Translate, and cannot know much about the infrastructure or methodology used to run it. Google Translate is a useful service, but it would be much more useful to me, a hacker, if I could spin up my own version of "Translate" with my own changes.

Level 1 is the level of control afforded by most hosted open-source software installations. Sage Cloud is an example of a service in this category. While Sage Cloud is an online service for which I do not directly control deployment, Sage itself is open-source software, and I can easily spin up my own "Sage" server with most features intact. Level 1 has many benefits over Level 0 for interested computer-users, not least among them that the user can study the implementation of the service to potentially improve it and change it to match his own purposes.

Level 2 is the strongest level of control over computation. In Level 2, the user controls the computer on which the software runs, and can choose how it is used and when it is upgraded. Level 2 is the level corresponding to traditional desktop software. Even closed-source software provides a fair amount of control to users when it is run locally – the user has a perpetual ability to run the software, and the service cannot be removed without recourse. Additionally, the user can potentially reverse engineer and/or add extensions to even closed-source software as long as he has access to the binary.

Level 2 is obviously stronger than Level 1, but I think Level 1 is still an important step up from the default zero-control level of typical cloud services.

4 Accessibility to Understanding

Accessibility to understanding is another criterion which has important implications for the practical level of control of humans over their computers.

Consider a large piece of computer software whose source is distributed as machine code, without comments, under the GPL. Technically, it is open-source. If you run a disassembler on it, you can get assembly code and change it to your liking. But while the software may be open-source, it will likely take you a very long time to figure out how it works and modify it to match your own goals. Your practical level of control over the software is therefore much smaller than it would be if you had access to the source in a high-level language. Here we see that it is not merely the legal status of a piece of software's source which determines your control over it, but also the technical status of that source.

One side-effect of the "Accessibility to Understanding" principle is that it sometimes indicates that, indeed, "worse" can be "better" for humans. Confronted with a choice between a 1-million-line program with marginally better performance at some problem and a 1000-line program, if you are like me, you will probably opt to use and hack on the 1000-line program.

5 Conclusion

In this article, I discussed three criteria which I think are useful for assessing how much control a user has over a piece of software. The first criterion is plainly evident to most people, but I think the other two are less often discussed and applied.

Modified 2018-08-01

Currell Berry
When to use TeX vs Org-Mode vs OpenOffice tag:34 Jun 15, 2023, 1:59:07 AM

Originally posted 2016-02-09

There are a number of tools out there which allow you to compose documents. Three of my favorites are TeX, Emacs org-mode, and OpenOffice. Each of these tools is open-source and allows the user to script and customize their experience.

Below are some factors which I think are helpful to consider when choosing between these document-preparation tools.

Use TeX when:

  1. You want output with a very high-quality appearance.
  2. You want to take advantage of TeX's powerful layout algorithms and routines.
  3. You want to typeset a bunch of mathematics.
  4. You want the document to be version-controlled using Git or similar SCM system.
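To illustrate points 2 and 3, here is a minimal LaTeX document typesetting a display equation (the content here is my own example, not from the original article):

```latex
\documentclass{article}
\begin{document}

The Gaussian integral is a classic result:
\[
  \int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}
\]

\end{document}
```

Running this through `pdflatex` produces typeset output with TeX's line-breaking and math-spacing algorithms applied automatically, and the plain-text source diffs cleanly under Git.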

Use Emacs org-mode when:

  1. Content is king, and you don't want custom layout at this stage.
  2. You don't need access to the underlying layout engine.
  3. You want to enter content, including mathematics, in a distraction-free and straightforward way.
  4. You want easy export to LaTeX, PDF, and HTML.
  5. You want the document to be version-controlled within Git or similar SCM system.
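For illustration, a small Org document combining headings, prose, and embedded LaTeX math might look like this (the title and headings are my own invention). Org's export dispatcher, bound to `C-c C-e`, then offers LaTeX, PDF, and HTML targets:

```org
#+TITLE: Notes on the Gaussian Integral
#+AUTHOR: Example Author

* Result
The classic identity:
\[ \int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi} \]

* Remarks
The plain-text source stays front and center while writing,
and diffs cleanly under Git; export is handled separately
via the dispatcher (=C-c C-e=).
```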

Use OpenOffice/LibreOffice when:

  1. Ease of composition is more important than a highly polished end-product.
  2. You are content with fairly standard and simple layout conventions – and don't require pixel-perfect control or algorithmic layout optimization.
  3. Mathematical typesetting is not that important to the document.
  4. You need to edit the document in conjunction with other users who are not technical and do not know TeX.
  5. You want integration with OpenOffice Calc (spreadsheet).
  6. The document does not need to be in version control.
Currell Berry