Opinion -- Berry Web Blog

Criteria for Software Freedom

Posted: 2016-03-15. Modified: 2017-08-01. Tags: Opinion, philosophy, programming.

“Free software” means software that respects users' freedom and community. Roughly, it means that the users have the freedom to run, copy, distribute, study, change and improve the software. Thus, “free software” is a matter of liberty, not price.

Free Software Foundation

1 Introduction

The FSF's definition of free software, written above, is a useful broad principle which relates to how much control a human user has over a given computer program. In this article, I discuss some specific criteria which we can use to assess "Software Freedom".

I think the definition most people reach for when they hear "free software" is simply that the software is open-source, or perhaps even that it costs nothing to use. While those are valid definitions of the words, I, like the FSF, think it is useful to load the term "Free Software" with more implications in order to better capture the nature of the relationship between a human user and a piece of software. What matters is not really the legal status of a piece of code, but rather the practical level of control which a user has over that code. In an era in which computer technology pervades ever more of our lives, I think it becomes more and more important that humans control the computations they use, and not the other way around.

I propose three criteria which I think are especially relevant today in our age of cloud services and increasingly complex software. My goal is to assess the level of control of the human user over a piece of software which she uses. The first criterion is simply the basic definition of open-source, the second is mostly implied by Stallman's definition of "Free Software", and the third is not so directly implied. My criteria are:

  • Availability of source-code.
  • Control over deployment.
  • Accessibility to understanding.

2 Availability of source-code

This criterion is probably what most people think of when they think of "free software" or "open-source software". Whether a piece of software is run remotely or locally, having access to the source-code can give a user great insight into what the software is actually doing. Availability of source-code often goes hand in hand with the user having greater "control over deployment". This criterion is the one addressed by the various open-source software licenses.

3 Control over Deployment

Control over deployment is a basic precondition to control over computation. If the computer user cannot control when an application starts, stops, and is modified, then the user cannot say that he controls the computation which is being done. Many of today's cloud web-apps run afoul of this criterion – not only are most of them closed-source and closed-data, but they can and will be discontinued whenever the provider decides, without recourse for the user. Cloud services are discontinued all the time.

I define three levels of "control over deployment":

  0. The user does not run the software, and does not have access to enough information about the infrastructure, source, and data of the project to "fork" the deployment of the software and run it herself.
  1. The user does not actually run the software on her computer, but if the service is ever discontinued or negatively modified the user has enough information to "fork" the deployment of the software and run it herself.
  2. The user controls execution of the software herself.

Level 0 is the default level of control over most web-apps. Google Translate is an example of a service I classify at this level. If Translate is ever discontinued, I have no ability to bring it back. I do not have access to the source code for Google Translate, and cannot know much about the infrastructure or methodology used to run it. Google Translate is a useful service, but it would be so much more useful to me, a hacker, if I could spin up my own version of "Translate" with my own changes.

Level 1 is the level of control afforded by most hosted open-source software installations. Sage Cloud is an example of a service in this category. While Sage Cloud is an online service for which I do not directly control deployment, Sage itself is open-source software, and I can easily spin up my own "Sage" server with most features intact. Level 1 has many benefits over Level 0 for interested computer-users, not least among them that the user can study the implementation of the service to potentially improve it and change it to match his own purposes.

Level 2 is the strongest level of control over computation. In Level 2, the user controls the computer on which the software runs, and can choose how it is used and when it is upgraded. Level 2 is the level corresponding to traditional desktop software. Even closed-source software provides a fair amount of control to users when it is run locally – the user has a perpetual ability to run the software, and the service cannot be removed without recourse. Additionally, the user can potentially reverse engineer and/or add extensions to even closed-source software as long as he has access to the binary.

Level 2 is obviously stronger than Level 1, but I think Level 1 is still an important step up from the default zero-control level of typical cloud services.

4 Accessibility to Understanding

Accessibility to understanding is another criterion which has important implications for the practical level of control of humans over their computers.

Consider a large piece of computer software whose source is distributed in machine code, without comments, under the GPL. Technically, it is open source. If you run a disassembler on it you can get assembly code, and change it to your liking. While the software may be open-source, it will likely take you a very long time to figure out how the software works and modify it to match your own goals. Therefore, your practical level of control over the software is much smaller than it could be if you had access to the source code in a high-level language. Here we see that it is not merely the legal status of the source of a piece of software which determines your control over it, but also the technical status of that source.

One side-effect of the "Accessibility to Understanding" principle is that it sometimes indicates that, indeed, "worse" can be "better" for humans. If you are confronted with a 1-million-line program which performs marginally better at some problem than a 1000-line program, then, if you are like me, you will probably opt to use and hack on the 1000-line program.

5 Conclusion

In this article, I discussed three criteria which I think are useful for assessing how much control a user has over a piece of software. The first criterion is plainly evident to most people, but I think the other two are less often discussed and applied.

When to use TeX vs Org-Mode vs OpenOffice

Posted: 2016-02-09. Modified: 2016-02-09. Tags: Opinion, computer_usage.

There are a number of tools out there which allow you to compose documents. Three of my favorites are TeX, Emacs org-mode, and OpenOffice. Each of these tools is open-source and allows the user to script and customize their experience.

Below are some factors which I think are helpful to consider when choosing between these document-preparation tools.

Use TeX when:

  1. You want output with very high quality appearance.
  2. You want to take advantage of TeX's powerful layout algorithms and routines.
  3. You want to typeset a bunch of mathematics.
  4. You want the document to be version-controlled using Git or similar SCM system.

Use Emacs org-mode when:

  1. Content is king, and you don't at this stage want custom layout.
  2. You don't need access to the underlying layout engine.
  3. You want to enter content, including mathematics, in a distraction-free and straightforward way.
  4. You want easy export to LaTeX, PDF, and HTML.
  5. You want the document to be version-controlled within Git or similar SCM system.

Use OpenOffice/LibreOffice when:

  1. Ease of composition is more important than highly polished end-product.
  2. You are content with fairly standard and simple layout conventions – and don't require pixel-perfect control or algorithmic layout optimization.
  3. Mathematical typesetting is not that important to the document.
  4. You need to edit the document in conjunction with other users who are not technical and do not know TeX.
  5. You want integration with OpenOffice Calc (spreadsheet).
  6. The document does not need to be in version control.

Fundamental Principles in Writing Computer Code

Posted: 2016-01-25. Modified: 2016-01-25. Tags: programming, Opinion, philosophy.

1 Introduction

There are a huge variety of programming languages out there, an infinite number of concepts, and numerous ways to think about any given problem. How can one logically discriminate among these choices to make wise programming decisions?

First off, there exist a number of software-design camps, each with its own design methodologies and folklore-ish guidelines.

Some people claim that you should thoroughly document your code with comments; others claim that you should use comments sparingly lest they become out of date and misdirect future readers.

Most people think it's important that programmers "write understandable code." But how does one define what is understandable and what is not? Is it better for code to be concise and dense (à la APL) or verbose and wordy (à la COBOL)? Some people argue that verbose code is easier to understand, as it can sometimes be "self-documenting", while others claim that dense code is easier to read, as it is shorter.

Object Oriented designers love applying acronyms such as SOLID and GRASP to software. Functional designers love applying principles and techniques such as immutability, declarativity, currying, and higher-order functions.

Every programmer has a preference for a certain programming language – probably the one which she has spent the most time with. People get into passionate debates comparing languages and software ecosystems.

Almost all programmers seem to have internalized the concept of "code smell" – we say things like "that code doesn't seem quite right" and "in Java, it's best practice to implement that feature differently".

2 What are the Objective Criteria?

All of the above statements, standing on their own, are subjective in nature. Is there a provable basis for any opinion with regards to software design?

My answer is that there are a number of objective mathematical criteria which affect software design, but that actually applying these criteria to real world problems involves making subjective judgements, just as everything in the real world involves making subjective judgements.

Below are some core objective principles which can be used to assess the quality of software relative to a set of criteria. The runtime principles are commonly discussed; my sourcecode principle, however, is not commonly expressed, at least not in the form I give it here.

3 Runtime Principles

3.1 Correctness

This principle is, essentially tautologically, the most important principle with respect to software implementation.

Given a set of goals G, I define correctness for an algorithm A as whether or not A successfully produces the right output and invokes the correct side effects to satisfy G. There can be subjectivity in creating and assessing the goals. Undecidability is a potential stumbling block for many algorithms, and must be handled in the definition of G.
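As a minimal illustration of this definition, the Python sketch below (all names are invented for the example) models the goal set G as a list of predicates over an algorithm's input and output, and calls an algorithm correct when every goal holds:

```python
# A minimal sketch of the correctness criterion: the goals G are modeled
# as predicates over (input, output) pairs, and an algorithm A counts as
# "correct" here iff its output satisfies every goal on every input.

def satisfies_goals(algorithm, inputs, goals):
    """Return True if algorithm's output on each input meets every goal."""
    return all(goal(x, algorithm(x)) for x in inputs for goal in goals)

# Example: a sorting algorithm with two goals -- the output is ordered,
# and it is a permutation of the input.
goals = [
    lambda x, y: all(a <= b for a, b in zip(y, y[1:])),  # ordered
    lambda x, y: sorted(x) == sorted(y),                 # same elements
]

print(satisfies_goals(sorted, [[3, 1, 2], [5, 4]], goals))  # True
```

Note that the subjectivity lives entirely in how the goal predicates are chosen; the check itself is mechanical.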

3.2 Runtime Guarantees

My definition of this principle differs from my definition of correctness in that, rather than assessing merely whether the correct output/side effects are produced within standard execution of the code, we are also assessing guarantees and/or probabilities of runtime "performance" in the areas of reliability, space, and time usage.

Examples of features which I would categorize under this principle include uptime probability guarantees, fault-tolerance, bounded response time guarantees, and bounded space usage guarantees.

I do not include "runtime computational complexity" (asymptotic measures of how the performance of an algorithm changes as the size of its input changes) in my definition of "Runtime Guarantees".
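To make one such guarantee concrete, here is a toy Python sketch (names invented) that checks a bounded response time by timing a call against a deadline; it is an illustrative monitor, not a real-time scheduling mechanism:

```python
# An illustrative check of one "runtime guarantee": bounded response
# time. We time a call and report whether it met its deadline. A real
# system would enforce the bound, not merely observe it after the fact.
import time

def meets_deadline(func, arg, deadline_seconds):
    """Run func(arg); return (result, True if it finished within the deadline)."""
    start = time.perf_counter()
    result = func(arg)
    elapsed = time.perf_counter() - start
    return result, elapsed <= deadline_seconds

result, in_time = meets_deadline(sum, range(1000), deadline_seconds=1.0)
```

A guarantee in the sense above would be a promise that `in_time` is always true (or true with some stated probability), backed by the design of the system rather than by a single measurement.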

3.3 Runtime Complexity

"Complexity" in various forms is a core topic of Computer-Science curricula, and is extensively discussed and researched. In computer science, we typically consider runtime time complexity and space complexity as primary criteria of the quality of an algorithm. One can also consider runtime "energy complexity" as a looser concept bounding likely energy usage.
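As a rough illustration, one can probe an operation's time complexity empirically by timing it at doubling input sizes; the Python sketch below (names invented) does this for the linear-time `sum`. Timings are noisy, so this is a heuristic experiment, not a proof of asymptotic behavior:

```python
# Empirical complexity probe: time an operation at doubling input sizes
# and inspect the growth. For a linear-time operation, doubling n should
# roughly double the measured time.
import time

def measure(op, n, repeats=5):
    """Best-of-repeats wall time of op applied to a list of n integers."""
    data = list(range(n))
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        op(data)
        times.append(time.perf_counter() - start)
    return min(times)

for n in (10_000, 20_000, 40_000):
    print(f"n={n:6d}  time={measure(sum, n):.6f}s")
```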

4 Sourcecode Principles

Each of the above principles has been extensively discussed in academic and engineering literature. However, notice that NONE of the principles above say anything about how we should organize or write our code. Below I put forward a principle which I think is the core principle affecting code organization and writing.

4.1 Occam's Razor – Minimum Description Length

4.1.1 Introduction

Occam's Razor has time and time again shown its power in choosing models to explain phenomena. It can be shown to emerge naturally in a wide range of fields, including Bayesian probability, natural philosophy, and theology.

Occam's Razor is simply the statement that "the simplest explanation is most likely to be the best."

4.1.2 Justification

The most convincing mathematical argument for Occam's Razor which I have seen comes from an analysis in Bayesian probability (Chapter 28 of "Information Theory, Inference, and Learning Algorithms" by David MacKay) which concludes that "simpler" explanations in a given explanation-space have higher posterior probability after any given observation.
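The flavor of that argument can be shown with a toy calculation (the numbers below are invented for illustration): a simple hypothesis can generate only a few possible data sets, so it concentrates its predictive probability, while a flexible hypothesis must spread its probability thin. When both can explain the observed data, the simpler one wins the posterior comparison:

```python
# Toy Bayesian Occam's Razor: H1 can generate 4 possible data sets, H2
# can generate 16. Each spreads its predictive probability uniformly
# over the outcomes it allows, and both can explain the observed data.

def posterior(likelihood, prior, total_probability):
    return likelihood * prior / total_probability

p_data_given_h1 = 1 / 4    # simple H1: mass spread over 4 outcomes
p_data_given_h2 = 1 / 16   # flexible H2: mass spread over 16 outcomes
prior = 0.5                # equal prior belief in each hypothesis

total = p_data_given_h1 * prior + p_data_given_h2 * prior
p_h1 = posterior(p_data_given_h1, prior, total)
p_h2 = posterior(p_data_given_h2, prior, total)
print(p_h1, p_h2)  # 0.8 0.2 -- the simpler hypothesis dominates
```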

4.1.3 Bridge to Software

But how does Occam's Razor apply to software development? After all, its commonly stated purpose is in model selection.

I think the bridge from Occam's Razor for model selection to Occam's Razor for code organization is recognizing that choosing among code organization options is a form of model selection.

The phenomena we are trying to explain (match) are the runtime requirements of the software system. The models are the different code layouts and structures which we can choose among.

Software development is about solving problems with computers. Therefore, we can view a program as a model estimating the desired solution to the underlying problem. If two models (programs) perform roughly the same function, then by the principle of Occam's Razor we can prefer the one which more "sharply" models the problem, i.e. the shorter one.

4.1.4 Application

If you agree that Occam's Razor is a reasonable principle to direct code composition, and even perhaps that it encompasses such commonly used principles as DRY (don't repeat yourself) and YAGNI (you ain't gonna need it), then the next question is: how can we apply Occam's Razor to make code organization choices?

Applying the principle, of course, is where we run into some trouble and must deal with subjectivity. In order to apply Occam's Razor to software, I propose that we look at Occam's Razor from a slightly different perspective which is often used in machine learning and statistics. Let's make more exact what we mean by "simpler" and instead discuss "minimum description length".

To compare two pieces of code with the same functionality, we can usually say that the code which has a smaller minimum description length more sharply models the desired feature set.

How shall we compare description length? Shall we compare lines of code? Bytes? Number of tokens? Function points?

I think each of the above approaches can be useful, but for most applications number of bytes or number of lines should be relatively interchangeable, when used together with common-sense.

Another alternative, which may be more useful to compare programs written in different languages or with different style conventions, is to compress the text of the program and compare the compressed size. This gets us closer to the "minimum description length" of the program and helps eliminate differences caused by whitespace, descriptive variable names, and the like.
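As a sketch of this heuristic, the Python snippet below (the two example fragments are invented) compares zlib-compressed sizes of functionally equivalent code as a crude proxy for minimum description length; a real comparison would of course need common sense about what the sizes mean:

```python
# Crude MDL heuristic: compare the compressed sizes of two functionally
# equivalent snippets. Compression discounts repetition and long names,
# bringing us somewhat closer to a "minimum description length".
import zlib

loop_version = b"""
result = []
for item in numbers:
    if item % 2 == 0:
        result.append(item * item)
"""

unrolled_version = b"""
result = []
if numbers[0] % 2 == 0:
    result.append(numbers[0] * numbers[0])
if numbers[1] % 2 == 0:
    result.append(numbers[1] * numbers[1])
if numbers[2] % 2 == 0:
    result.append(numbers[2] * numbers[2])
"""

def compressed_size(source):
    return len(zlib.compress(source, level=9))

for name, src in (("loop", loop_version), ("unrolled", unrolled_version)):
    print(name, len(src), "bytes raw,", compressed_size(src), "bytes compressed")
```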

5 Conclusion

In this paper, we discussed a number of principles for writing software which are objective in their core nature and are based in mathematics. We discussed the standard principles which can be used to assess a program's runtime characteristics. In addition to those, however, we also discussed how the "Minimum Description Length Principle" can be used to make choices about code organization and design. Not only does the "Minimum Description Length Principle" encompass other commonly used principles such as "DRY" and "YAGNI", but it also provides a general framework to assess the "sharpness" of a codebase's match to a given problem. For each of the principles we discussed, we touched on the various difficulties in applying it to real-world problems.

While the principles discussed in this paper do not change the subjective nature of software development, they do correspond to core features of a software program that can be measured objectively or pseudo-objectively, and that can be strongly supported by simple arguments in mathematics and philosophy.

Prognostication vs Software Development: Subjectivity and Complexity

Posted: 2016-01-25. Modified: 2016-01-25. Tags: Opinion, programming, philosophy.

"Crede, ut intelligas" – "Believe that you may understand"

– Saint Augustine, Sermon 43.7

As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.

– Albert Einstein, Geometry and Experience

1 Uncertainty in Life

Underlying every experience in life is uncertainty. It is in general impossible to obtain complete confidence about anything.

When a human walks through a room, she can instantly filter out irrelevant details. The reason that she can filter out irrelevant details is that we as humans subconsciously accept certain truths and foundational beliefs. Without making these operational assumptions, we would have to consider an untold number of possibilities at any given point in time and we would not be able to function in the real world.

Examples of details which we safely discard include:

  1. The ceiling is not going to fall on me, I do not need to analyze its structural integrity.
  2. The pattern of light on the wall is still, therefore it is unlikely that it is a threat to me but rather is just light shining through the window.
  3. The laws of gravity, force, and momentum will continue to stay in effect.

Subconsciously accepting these beliefs and many others about how our world functions allows us to focus on important threats and goals, such as

  1. My little sister just stuck out her leg to trip me.

While it is easy for us to filter out extraneous details and sensory inputs as we go about our daily lives, this is by NO MEANS easy for automated systems. Computers, unless they are programmed quite cleverly, are prone to get bogged down in the "combinatorial explosion" of possibilities which real-world inputs and problems tend to create. Computers' difficulty with real-world problems has prompted the creation of a saying known as "Moravec's Paradox":

"it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility" – Hans Moravec, 1988, Mind Children

Anything which is done following a rigid set of rules can often be automated comparatively easily, while tasks, however simple, which involve subjective perceptions can often strongly resist automation.

2 Uncertainty/Subjectivity in Software Engineering

Most human occupations make use of our ability to navigate ambiguity, and software "engineering" is no exception.

Modern software development is essentially climbing through the apparatus of an immensely complex logical system and making appropriate modifications to produce a desired result. At every step, ambiguity and intuition rear their heads. We face ambiguity in our objectives, subjectivity at every implementation step toward meeting those objectives, and immense ambiguity in every interpretation of human communication.

I think that much of "software engineering" is quite subjective. In the design phase of a project, people with different backgrounds and skillsets can often approach the problem with vastly differing methodologies and architectures, but no one methodology can be proven to be "correct". Programmers can often achieve similar results using any of a variety of programming paradigms, languages, and planning approaches. Whether it is best to design in a functional style with immutable and declarative design principles, in an object-oriented style with SOLID design principles, or in a procedural style with close affinity to the hardware depends on the problem at hand and the prior experience of the programmers. The goals for a project may include hard boundaries such as reliability and performance guarantees, and perhaps runtime-complexity bounds for the behavior of the program, but the organization of the code which gets us to the end-result is in large part a matter of taste and discernment. It all comes down to achieving the desired external behavior in an acceptable amount of time. Regardless of the specific design choices the designer makes, what is critical is that the designer is able to intuitively grasp the problem and the interacting actors which affect implementing the solution, and can swiftly navigate the logic maze towards a workable solution. In other words, a software designer must be able to intuitively navigate complexity as well as subjectivity in order to produce a successful software design.

3 Complexity

If a planner cannot effectively grasp the interacting pieces going into a planned software system, but instead is overwhelmed by the software's complexity, the planner's designs and estimates are likely to be quite poor.

The issue of managing complexity is a bridge from the skillset of software development to the skillset of many other fields, and appears to be a core human mental skill. Just as it is difficult to deliver quality software in a system for which you do not have an effective mental model, it is difficult for people to deliver any reasonable insight on real-world systems for which they do not have an effective mental model.

The classic case of a system which is too complex for people to gain reasonable insight on in general is… any system for which a person is trying to predict a future state.

It seems that once a system reaches a certain minimum complexity level, its specific behavior becomes completely unpredictable over time without actually observing the system to see what will happen. A New Kind of Science by Stephen Wolfram popularizes this idea as the "Principle of Computational Equivalence" –

"Almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication". – (A New Kind of Science, page 717).

Wolfram supports his Principle with analysis of hundreds of systems and processes from a diverse set of domains, and claims important implications for the Principle. The most obvious implication is that, once a system has reached a certain level of complexity, trying to predict its future by using computational shortcuts, formulas, or intuition will very often fail.

I think Wolfram's Principle of Computational Equivalence closely matches our experience with trying to predict the future and extrapolate the past within human society. Prognosticators have long claimed to be able to foresee future events, but they have a poor track record indeed. Ask any economist what the state of the economy will be, any MBA which company's stock will be high, or any professor of world affairs what the major flashpoints for world conflict will be, in only a few short years, and while they may very well give you their opinion, and wax eloquent about technical details in their field and complex models which they may consult, they will also almost certainly be wrong. Just as most economists incorrectly predicted financial stability through the period of 2007/2008 (http://knowledge.wharton.upenn.edu/article/why-economists-failed-to-predict-the-financial-crisis/), most mutual-fund managers will find that they cannot beat the odds of the market for long, but will instead regress to the mean in the long run (for evidence, compare the long-term returns of the largest mutual funds relative to plain exchange-traded funds).

The difficulties which arise when attempting to predict the future are very similar to the previously mentioned difficulties with complexity which often arise in software development. Prognosticators attempt to predict the outcome of an immensely complex system (the world) with myriad free variables. Because the world is more complex than any person can make an effective mental model of (besides using various heuristics which may or may not appear correct in hindsight), the track record of people trying to predict what will happen in the future is extremely poor.

Similarly, in software development we work with very complex systems. In some respects, software development could be considered less complex than international politics, but in other respects, it could be considered more complex. Modern software development involves coordinating the execution of billions of networked logic units over time. While you may be able to largely understand the physics of the operation of a single transistor within a computer (complete with various quantum and traditional analog phenomena), you can certainly not completely understand or predict the operation of a machine with billions of interacting transistors (which is what today's computers are). Rather, you take certain assertions about your machine on faith, and assign them a very high degree of confidence. When something goes wrong with your computer, then, your limited knowledge about how your computer works allows you to assign an informative prior over the sample space of possible issues with your computer, and quickly zoom in to the likely cause of the issue. When a person is developing software, if he does not have a good enough mental model of the processes he is trying to control, he will almost certainly find that the project does not go as he expects.

4 Conclusion

In this paper, we discussed subjectivity and complexity in the context of software development. We discussed how humans' intuitive ability to operate in the presence of subjectivity gives us a key advantage over mechanical automata such as computers. We also discussed how real-world phenomena often display a great deal of apparent complexity as well as subjectivity. Wolfram's Principle of Computational Equivalence argues that there is often no shortcut to predict the future state of a system, as you must run a computation of equivalent complexity to the system itself in order to predict the future state of the system. Just as it proves difficult to predict the future state of the world, because we have no effective means to simulate the operation of the world, it is difficult to predict the success or failure of a project such as a software project if the project leader does not have an effective grasp on all the important variables, considerations, and plans, so that he can mentally compute all the possible outcomes.

As long as programmers make appropriate simplifying assumptions and form an effective mental model of the interacting pieces of the program, software development can proceed quickly and efficiently. The moment, however, that a programmer loses an effective grasp of how the system he is working on works, the programmer is facing the same scenario as the prognosticator – how can you predict how something will pan out when you do not have an appropriate understanding of the forces at play?

Why I prefer emacs+evil to vim

Posted: 2015-10-22. Modified: 2015-12-21. Tags: LISP, Opinion, emacs.
  1. Variable width font support.

    I studied my own reading speed on a variety of documents, and found that I read variable width text about 15% faster than fixed width text. I don't mind writing code in fixed-width fonts, but if I am going to use one text editor for everything, then I very much appreciate having variable-width support.

  2. org-mode > markdown

    Org-mode allows you to write structured content within Emacs, and supports the writer with a variety of useful features and tools. Besides the rich editing, export, and extension possibilities offered by emacs org-mode itself, I find that the org format is superior to markdown for my purposes. Two primary reasons for this are that org provides syntax (such as drawers) for defining all sorts of metadata about your text, and also that org is designed in such a way that it is basically equally usable as variable-width text and fixed-width text. In particular I dislike the extent to which markdown relies on fixed-width text for its display features.

  3. evil >= vim.

    Emacs "Evil" mode pretty much provides a superset of commonly used vim functionality. Evil supports all the commonly used vim editing commands, which allows you to take advantage of vim's ergonomic design as you edit text. Evil actually improves on some vim features – for example, search and replace shows replacements being entered as you type them. Evil also provides access to the full power of Emacs just one M-x away – you get the ergonomics of vim with the power of emacs when you want it.

  4. superior extensibility (emacs-lisp > vimscript)

    Especially for a lisp fan such as myself, Emacs Lisp seems a superior language to Vimscript. Emacs Lisp is kind of like a baby version of Common Lisp, and supports a rich set of features on its own and with the addition of third-party libraries.

    However, the real advantage of Emacs in extensibility is the fact that the majority of Emacs is actually written in Emacs Lisp. Emacs' Github mirror indicates that the ratio of elisp to C in the project is ~4:1. I believe most of the C code is quite low-level, and is related to multiplatform support, core rendering, and the like. On the other hand, Vim's Github repo indicates that Vim's vimscript-to-C ratio is ~0.8:1. Since the vast majority of Emacs is written in emacs-lisp, you can very easily dive into functionality from within Emacs to understand and/or modify how things work.

  5. "self documenting" w/ C-h f, C-h k

    The Emacs Manual describes Emacs as the "extensible, customizable, self-documenting real-time display editor". One feature I really like about Emacs is the "self-documenting" component of that description. Emacs makes it very easy to look up the docstring of a given function or command, easy to determine what a keyboard shortcut does, easy to determine what shortcuts are available, easy to determine what functionality is present in various modes, and more. In short, Emacs makes it possible to spend a great deal of time within Emacs without having to go online to look up how to use a given function or tool.

Declarative vs Programmatic UI

Posted: 2015-09-15. Modified: 2016-01-25. Tags: GUI, Opinion, programming.

There are two common ways of going about defining a graphical user interface.

  1. Declarative UI.
    1. You adapt a markup language like HTML or XML, possibly in combination with a layout language like CSS, to define the identity and basic placement of widgets and controls.
    2. You traverse your declared UI using a real programming language like Javascript in order to add functionality and advanced UI features.
    3. This approach has been adopted by frameworks such as AngularJS and Qt+QML, and is the standard approach for Android UI development.
  2. Programmatic UI.
    1. You create and lay-out the elements of the UI directly in a programming language. While you may still achieve separation of concerns by delegating UI creation and layout to a dedicated "View" object, all actions necessary to construct the UI are programmatically guided instead of declaratively specified.
    2. Older UIs all used this approach; it is still commonly used today to construct UIs with a wide variety of desktop frameworks, and can also be used to manipulate the web DOM.
    3. Examples include Swing, SWT, GTK, and Qt; programmatic construction is also an option on Android.
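To make the contrast concrete, here is a minimal Python sketch of both styles. The `Widget` class and the dictionary-based "spec" format are invented for illustration and do not belong to any real toolkit.

```python
# Illustrative sketch: "Widget" and the dict-based "spec" format are
# hypothetical stand-ins, not the API of any real UI framework.

class Widget:
    def __init__(self, **props):
        self.props = props
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

# Programmatic style: build the widget tree step by step in code.
root = Widget(kind="column")
title = root.add(Widget(kind="label", text="Hello"))
button = root.add(Widget(kind="button", text="OK"))

# Declarative style: describe the same tree as data and let a builder
# walk it -- closer in spirit to HTML/XML or QML.
spec = {"kind": "column", "children": [
    {"kind": "label", "text": "Hello"},
    {"kind": "button", "text": "OK"},
]}

def build(node):
    props = {k: v for k, v in node.items() if k != "children"}
    widget = Widget(**props)
    for child in node.get("children", []):
        widget.add(build(child))
    return widget

root2 = build(spec)
assert [c.props["kind"] for c in root2.children] == ["label", "button"]
```

Both end up with the same widget tree; the difference is whether the construction steps are spelled out or inferred from a description.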

Is one approach better than the other? What are the upsides and downsides of each approach?

Declarative UIs are arguably at a higher level of abstraction. At the cost of a potential learning curve and some flexibility, you specify what UI you want and leave it to the computer to figure out what actions to take to achieve it. This is in some ways similar to the oft-mentioned divide between functional and imperative programming languages – in a functional language, you more often define what you want done without going into the details of how to do it. For a trivial, and common, example, the list manipulation functions "map," "filter," and "reduce" describe what operation to perform without going into the details of accumulator variables, index variables, or loop bounds.
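A small Python sketch of that divide, computing the sum of squared even numbers both ways:

```python
from functools import reduce

nums = [3, 1, 4, 1, 5, 9, 2, 6]

# Imperative: you spell out "how" -- accumulator, loop, condition.
total = 0
for n in nums:
    if n % 2 == 0:
        total += n * n

# Declarative/functional: you state "what" -- square the evens, sum them.
total2 = reduce(lambda acc, n: acc + n,
                map(lambda n: n * n,
                    filter(lambda n: n % 2 == 0, nums)),
                0)

assert total == total2 == 56
```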

Abstractions are leaky, however. If a declarative UI framework is not closely aligned with what you, the programmer, want to do, you may encounter a significant "impedance mismatch" and/or learning curve in making it do what you need. For example, imagine performing an arbitrary action upon a click in Swing vs AngularJS. Swing is very programmatic – you simply register a click handler and do whatever you want to any other item on your page. In AngularJS, while you technically can register a click handler to do this sort of thing, it is not considered good practice; the standard approach is to keep the view bound to an underlying data model using "directives". To accomplish an arbitrary action using directives, you may have to think beyond simply performing the action and consider AngularJS's data-binding semantics as part of your problem. Learning AngularJS two-way binding properly involves a significant learning curve, and it is not appropriate for all web applications.
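The two styles can be sketched in a few lines of Python. The `Model` class and its watcher mechanism here are invented stand-ins for data binding, not AngularJS's actual machinery.

```python
# Minimal sketch of the two styles; "Model" and its watcher mechanism
# are invented for illustration, not AngularJS's real implementation.

class Model:
    def __init__(self):
        self._data = {}
        self._watchers = []

    def watch(self, fn):
        self._watchers.append(fn)

    def set(self, key, value):
        self._data[key] = value
        for fn in self._watchers:     # notify every bound view
            fn(key, value)

view = {}

# Data-binding style: the view only ever reflects the model.
model = Model()
model.watch(lambda k, v: view.__setitem__(k, v))
model.set("count", 1)                 # view updates automatically

# Programmatic style: a click handler mutates the view directly.
def on_click():
    view["count"] = view.get("count", 0) + 1

on_click()
assert view["count"] == 2
```

In the binding style, arbitrary actions must be expressed as model changes; in the programmatic style, the handler can touch anything directly.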

AngularJS claims on its website that it is optimized for CRUD (Create, Read, Update, Delete) websites, and even has a disclaimer that it may not work well for sites with heavily-custom DOM manipulation needs. For example, I suspect Google Docs would be very difficult to implement in idiomatic AngularJS.

In conclusion, no abstraction meets all needs. While AngularJS may be great for CRUD apps (once you take significant time to understand it), it is not good for DOM-manipulation-heavy apps which do not lend themselves well to simple data binding. Declarative, data-binding user interface frameworks tend to have advantages for applications close to the intended purpose of the framework, but if you need full customization and/or performance, you may need to specify not only "what" the computer should do but also "how" it should do it, using a traditional procedural user interface API.

The Case for LISP

Posted: 2015-09-15. Modified: 2016-01-25. Tags: LISP, Opinion, programming.
  1. There are multiple ways to think about problems. *
  2. Given a real-world or programming problem, often if you find the right abstraction to model it you can make the problem vastly simpler and/or make your software vastly higher quality.
  3. Therefore, solving a problem well should involve trying to find the best abstraction possible.
  4. Programming languages influence your thought processes to match what they offer.
    1. "if you see enough nails everything starts to look like a hammer".
    2. See: Sapir Whorf hypothesis and Paul Graham: Succinctness is Power.
  5. Lisp offers the flexibility to adapt to ANY paradigm, imposes the least constraints on your thinking, and therefore is likely often the best language to use to approach a problem. **

\* I hold the idea that there are multiple ways to think about problems to be fairly obviously true. But in case you don't agree yet, here is some justification. An example of a problem which can profitably be approached from multiple perspectives is "dynamic programming". I can approach a dynamic programming problem top-down, recursing over a DAG of subproblems, or bottom-up, filling out a table of precomputed values. Either viewpoint solves the same problem – there are multiple ways to think about it.
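A Python sketch of both viewpoints on the classic Fibonacci subproblem graph:

```python
from functools import lru_cache

# Top-down: recurse over the DAG of subproblems, memoizing results.
@lru_cache(maxsize=None)
def fib_td(n):
    return n if n < 2 else fib_td(n - 1) + fib_td(n - 2)

# Bottom-up: fill out a table of precomputed values in order.
def fib_bu(n):
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]

assert fib_td(30) == fib_bu(30) == 832040
```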

\** One useful corollary to the idea that different paradigms are good for different applications is the common wisdom about optimal applications for functional and object-oriented programming. Functional languages are often said to be good for problems which resemble computation, with a defined input and output (like compilers). OOP languages are said to be good for problems where you model the natural world as objects. Lisp, of course, lets you choose between object-oriented and functional programming, and lets you implement entirely new language constructs within the language itself, extending it to support nearly any paradigm you can think of.
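As a small illustration, here is the same toy problem – totaling an order – in both paradigms, sketched in Python; the `Item` and `Order` names are invented for the example.

```python
from dataclasses import dataclass
from functools import reduce

# "Item" and "Order" are made-up names for this illustration.
@dataclass
class Item:
    price: float
    qty: int

# Functional view: the order is input, the total is output.
def total(items):
    return reduce(lambda acc, it: acc + it.price * it.qty, items, 0.0)

# Object-oriented view: the order is a thing that knows its own total.
class Order:
    def __init__(self, items):
        self.items = items

    def total(self):
        return sum(it.price * it.qty for it in self.items)

items = [Item(2.0, 3), Item(1.5, 2)]
assert total(items) == Order(items).total() == 9.0
```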

Java vs Lisp

Posted: 2015-09-15. Modified: 2015-12-21. Tags: LISP, Java, Opinion, programming.

Java is the best for other people to understand your abstractions…

  • with the exception of poor data-structure initializers.
    • but you can get around this pretty easily by writing clever initializers
  • Excessive OOP considered hazardous
  • Java's static typing provides an excellent source of computer-checked, always up-to-date documentation.
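As a rough analogy in Python (rather than Java), type annotations can play the same documentation role, and a checker such as mypy can keep them honest:

```python
# Analogy only: Python type hints standing in for Java's static types.
# The signature alone tells a caller what to pass and what comes back.
def discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage."""
    return price * (1.0 - percent / 100.0)

assert discount(200.0, 25.0) == 150.0
```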

Lisp is the best for creating new abstractions quickly…

  • but the lack of static typing means it's sometimes difficult to figure out how to properly call code, etc…

Self Documenting Code

Posted: 2015-09-15. Modified: 2015-12-21. Tags: Opinion, programming.

Code should be, to a reasonable extent, self-documenting to a proficient programmer with reasonable command of the basic platform/language. Dependency on libraries/frameworks can either help or hinder this goal.

If the usage of the library is clearly obvious from how the library is used in the program, then the program is still self-documenting.

On the other hand, if the library/framework requires extensive knowledge about it in order to understand its usage, then the code has become NOT self-documenting. If support for the library is ever dropped, you may regret your dependency on its arcane format.
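A contrived Python sketch of the difference; the query mini-language mentioned in the comment is hypothetical:

```python
# Self-documenting: a proficient Python reader needs no external docs.
def active_user_emails(users):
    """Return the email of every user whose account is active."""
    return [u["email"] for u in users if u["active"]]

# Not self-documenting: the same task could hinge on knowing a
# (hypothetical) library's query mini-language, e.g. q(users, "a=1|f:email"),
# which is opaque without that library's manual.
users = [
    {"email": "a@example.com", "active": True},
    {"email": "b@example.com", "active": False},
]
assert active_user_emails(users) == ["a@example.com"]
```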

On Abstraction

Posted: 2015-09-15. Modified: 2016-01-06. Tags: programming, philosophy, Opinion.

The difference between the right abstraction and the wrong abstraction is a huge difference in simplicity and clarity.

"I do believe that bulk, in programming or writing, can sometimes be an inverse measure of clarity of thought." – Leland Wilkinson, The Grammar of Graphics

Bloatware reflects design-by-committee and sloppy thinking.

propositional logic vs predicate calculus

A programming problem can be described as a set of


and requirements.

Every programming problem has an infinite number of solutions.

no-free-lunch theorem
