Devel::MAT::Dumper independent distribution

I've just released a new version of Devel-MAT, now with the heap dumping part split out into its own distribution, Devel-MAT-Dumper.

There are two reasons for this. Firstly, by being split into its own distribution, the heap-dumper part is smaller and lighter to install in places like production servers, as it needs no non-core dependencies. Since the heapdump files still need analysis somewhere, you'll still have to install the full Devel-MAT distribution on some machine - such as a workstation or laptop - but you no longer need the analysis tools on every target machine.
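As a sketch of the intended workflow, assuming cpanm for installation and the -dump_at_DIE import option (check each distribution's documentation for the full set of options):

```
# On the production server: only the lightweight dumper is needed
$ cpanm Devel::MAT::Dumper

# Ask for a heap dump to be written if the program dies
$ perl -MDevel::MAT::Dumper=-dump_at_DIE myprogram.pl

# On a workstation or laptop: install the full analysis toolset,
# copy the .pmat file over, and load it into the pmat shell
$ cpanm Devel::MAT
$ pmat myprogram.pl.pmat
```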

Secondly, since the two parts are now in independent distributions, they can have different Perl version requirements. I want to keep the ability to analyse Perl programs running on versions from 5.10 onwards (and maybe sometime see if I can push that back to 5.8), but now that the distributions are split I have begun to make use of 5.14 features in some of the analysis tools. This means that you can still analyse programs running on 5.10, but you will need an installation of 5.14 or later to run the tooling.

Looking a little further afield, I plan to continue writing more tooling, perhaps also published as independent distributions. It's still the case that almost every time I run into a situation that needs memory analysis like this, I end up writing more tools to help, so the flow of additional tooling will continue for a while yet.

I'm also keen to hear success stories from others (or even failure ones - what couldn't you work out, and what might I do to help?) to help guide the creation of more tooling. I'm likewise available to consult with people on their own problems - send me a note to

leonerd [at] leonerd [dot] org [dot] uk

and we can discuss your requirements.


Async/await in Perl - control flow for asynchrony

I've decided that my Future::AsyncAwait CPAN module is sufficiently non-alpha that I've started migrating some of my less critical code to use it. I thought I'd pick a few of the Device::Chip drivers for various kinds of chip, because they're unlikely to be particularly involved in anyone's real deployment code; really I only wrote those to test out some ideas on the chips before writing microcontroller code in C for them. These seemed like good candidates to begin with.

Here's an example of a function in the previous version, using Futures directly. The code had lots of syntactical noise - some ->then chaining, and the Future::Utils::repeat loop not looking like a regular foreach loop. You can just about read what's going on, but it's clear there's a lot of machinery noise getting in the way of really understanding the code.
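The code itself isn't reproduced here, but a hypothetical driver method in the same style gives a flavour of it (the method and register names are invented for this sketch, not taken from a real Device::Chip driver):

```perl
use Future;
use Future::Utils qw( repeat );

# Hypothetical method: start a conversion, then read $count bytes
# out of a FIFO register, one Future-returning read at a time
sub read_fifo
{
   my $self = shift;
   my ( $count ) = @_;

   my @bytes;

   $self->write_register( REG_CTRL, CTRL_START )->then( sub {
      repeat {
         $self->read_register( REG_FIFO )->on_done( sub {
            push @bytes, $_[0];
         });
      } foreach => [ 1 .. $count ];
   })->then( sub {
      Future->done( \@bytes );
   });
}
```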

By rewriting all the logic using await expressions inside an async sub, we arrive at a version that much more closely resembles the sort of thing you'd write in straight-line synchronous code. When reading it you can simply skim over the awaits and read it like synchronous code.
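For comparison, here's how a hypothetical driver method of that shape reads when written with async/await (again, the names are invented for this sketch):

```perl
use Future::AsyncAwait;

# The same hypothetical FIFO-reading method, now as an async sub:
# each await suspends until its Future completes, so the body reads
# like plain synchronous code with a regular foreach loop
async sub read_fifo
{
   my $self = shift;
   my ( $count ) = @_;

   await $self->write_register( REG_CTRL, CTRL_START );

   my @bytes;
   foreach ( 1 .. $count ) {
      push @bytes, await $self->read_register( REG_FIFO );
   }

   return \@bytes;
}
```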

A question you might ask at this point is why I'd choose to implement this particular set of syntax and semantics, out of the various possibilities for managing asynchronous control flow. Aside from its general neatness and applicability to Futures (which I've already worked with at length), there's one key reason: the async/await syntax here is the exact same thing as implemented in Python 3, ES6, C# 5 and Dart; even Rust is currently considering adopting it. Yes, it's nice to have a good concurrency model built into the language, but it's considerably stronger if it's the same as the consensus among a variety of other languages too.

Some language references for them:

Python Tasks and coroutines
JavaScript async function
C# Asynchronous Programming
Dart Dart Language Asynchrony Support

If the four quite semantically-different languages of Python, JavaScript, C# and Dart can all come to the same idea, then maybe it has merit. I honestly think that given a few years, async/await could become as ubiquitous as if blocks or while loops, to the level of "well obviously our language has that". This is why I wanted to steal it into Perl. In ten years' time it might look as foolish for a language not to have an async/await construct as it does today for it not to have a try/catch or a switch.

Ah... more on that subject another day perhaps ;)


Developing against $HOME/lib libraries and LD_LIBRARY_PATH

I've come up against an awkward situation using libtool, and I wonder if I could ask for some advice on what I'm probably doing wrong here.

I'm developing two different C libraries; let's just call them A and B. A is fairly standalone, and B depends on A.

Using libtool, I can develop on A fine. In particular, it has some internal unit-tests which I run using libtool --mode=exec. These run OK.

Using libtool, I can develop on B just fine, provided the A library is installed as a real system package. The linker can find it by the normal mechanisms and it lives in /usr/lib along with other system things. All is good. Library B also has some unit-tests that are executed with libtool --mode=exec. So far so good.

But now suppose I have a possible change I want to make that needs edits in both libraries. I don't want to build a new system package yet and upset all the other users on this box. That's fine, I can just install library A into my $HOME directory using

$ make install PREFIX=$HOME

In order to be able to test programs that use library A, I'm going to need to set LD_LIBRARY_PATH so they can find the library there in preference to the system-installed one, so in my .profile I use

export LD_LIBRARY_PATH=$HOME/lib

But now, that upsets every use of libtool --mode=exec to run any of the unit tests for my libraries in their source trees (for both libraries A and B). With LD_LIBRARY_PATH set in the environment, the libtool wrapper script no longer sets it up at all, meaning that libtool --mode=exec on a unit test just sees the system-packaged versions of each library. This means I can't test the new code I just wrote in library A, because the unit-test executables don't see it.

I could make unit tests for library A itself run fine by entirely defeating the LD_LIBRARY_PATH mechanism again, if I

$ LD_LIBRARY_PATH= make test

But what of library B? It has to be able to see its own locally-built library in $CWD/.libs and library A in $HOME/lib. So I could

$ LD_LIBRARY_PATH=`pwd`/.libs:$LD_LIBRARY_PATH make test

But at that point I really feel like I am fighting against libtool, rather than it helping me do this right.

So, what am I doing wrong here?

Alternatively, am I using this entire setup in its intended way, but libtool just isn't playing ball with me?


More Perl memory leaks with Devel::MAT - another use-case story

I've written a second post about Devel::MAT on the company blog, following up my first one.

Again, rather than copy the entire thing I'll just link to it here instead:

Tracing Perl memory leaks with Devel::MAT, part 2

As a little preview, I'll add that this one has screenshots.


Tracing Perl memory leaks with Devel::MAT - a use-case story

I've lately been working a lot on the analysis tools around Devel::MAT, the toolset used for analysing memory leaks, unbounded growth, and other memory-related issues affecting Perl programs.

I wrote a blog post about it on the company blog. Rather than copy the entire thing I'll just link to it here instead:

Tracing Perl memory leaks with Devel::MAT


Perl Parser Plugins 3 - Optrees

<< First | < Prev | Next >

So far we've seen how to interact with the perl parser to introduce new keywords. We've seen how we can allow that keyword to be enabled or disabled in lexical scopes. But our newly-introduced syntax still doesn't actually do anything yet. Today let's change that, and provide some new syntax that really does something.


To understand the operation of any parser plugin (or at least, one that actually does anything), we first have to understand some more internals of how perl works; a little of how the parser interprets source code, and some detail about how the runtime actually works. I won't go into a lot of detail in this post, only as much as needed for this next example. I'll expand a lot more on it in later posts.

Every piece of code in a perl program (i.e. the body of every named and anonymous function, and the top-level code in every file) is represented by an optree; a tree-shaped structure of individual nodes called ops. The structure of this optree broadly relates to the syntactic nature of the code it was compiled from - it is the parser's job to take the textual form of the program and generate these trees. Each op in the tree has an overall type which determines its runtime behaviour, and may have additional arguments, flags that alter its behaviour, and child ops that relate to it. The particular fields relating to each op depend on the type of that op.

To execute the code in one of these optrees the interpreter walks the tree structure, invoking built-in functions determined by the type of each op in the tree. These functions implement the behaviour of the optree by having side-effects on the interpreter state, which may include global variables, the symbol table, or the state of the temporary value stack.

For example, let us consider the following arithmetic expression:

(1 + 2) * 3

This expression involves an addition, a multiplication, and three constant values. To represent it as an optree requires three kinds of ops - an OP_ADD op represents the addition, an OP_MULT the multiplication, and each constant is represented by its own OP_CONST. These are arranged in a tree structure, with the OP_MULT at the toplevel, whose children are the OP_ADD and one of the OP_CONSTs, the OP_ADD having the other two OP_CONSTs as its children. The tree structure looks something like:

OP_MULT
  +-- OP_ADD
  |     +-- OP_CONST (IV=1)
  |     +-- OP_CONST (IV=2)
  +-- OP_CONST (IV=3)
Side note: it is unlikely that a real program would ever actually contain an optree like this one, because the compiler will fold the constants out into a single constant value. But this will serve fine as a simple example to demonstrate how it works.

You may recall from the previous post that we implemented a keyword plugin that simply created a new OP_NULL optree; i.e. an optree that doesn't do anything. If we now change this to construct an OP_CONST we can build a keyword that behaves like a symbolic constant; placing it into an expression will yield the value of that constant. This returned op will then be inserted into the optree of the function containing the syntax that invoked our plugin, to be executed at this point in the tree when that function is run.

To start with, we'll adjust the main plugin hook function to recognise a new keyword; this time tau:

static int MY_keyword_plugin(pTHX_ char *kw, STRLEN kwlen,
    OP **op_ptr)
{
  HV *hints = GvHV(PL_hintgv);
  if(kwlen == 3 && strEQ(kw, "tau") &&
     hints && hv_fetchs(hints, "tmp/tau", 0))
    return tau_keyword(op_ptr);

  return (*next_keyword_plugin)(aTHX_ kw, kwlen, op_ptr);
}

Now we can hook this up to a new keyword implementation function that constructs an optree with a OP_CONST set to the required value, and tells the parser that it behaves like an expression:

#include <math.h>

static int tau_keyword(OP **op_ptr)
{
  *op_ptr = newSVOP(OP_CONST, 0, newSVnv(2 * M_PI));

  return KEYWORD_PLUGIN_EXPR;
}

We can now use this new keyword in an expression as if it was a regular constant:

$ perl -E 'use tmp; say "Tau is ", tau'
Tau is 6.28318530717959

Of course, so far we could have done this just as easily with a normal constant, such as one provided by use constant. However, since this is now implemented by a keyword plugin, it can do many exciting things not available to normal perl code. In the next part we'll explore this further.

<< First | < Prev | Next >


Perl Parser Plugins 2 - Lexical Hints

<< First | < Prev | Next >

In the previous post we saw the introduction to how the perl parser engine can be extended, letting us hook a new function in that gets invoked whenever perl sees something that might be a keyword. The code in the previous post didn't actually add any new functionality. Today we're going to take a look at how new things can actually be added.

A Trivial Keyword

Possibly the simplest form of keyword plugin is one that doesn't actually do anything, other than consume the keyword that controls it. For example, let's consider the following program, in which we've introduced the please keyword. It doesn't have any effect on the runtime behaviour of the program, but it lets us be a little more polite in our request to the interpreter.

use tmp;
use feature 'say';

please say "Hello, world!"

To implement this plugin, we start by writing a keyword plugin hook function that recognises the please keyword, and invokes its custom handling function if it's found. It's purely a matter of style, but I like to write this as a small function for the plugin hook itself that simply recognises the keyword. If it finds the keyword it invokes a different function to actually implement the behaviour. This helps keep the code nice and neat.

static int (*next_keyword_plugin)(pTHX_ char *, STRLEN, OP **);

static int MY_keyword_plugin(pTHX_ char *kw, STRLEN kwlen,
    OP **op_ptr)
{
  if(kwlen == 6 && strEQ(kw, "please"))
    return please_keyword(op_ptr);

  return (*next_keyword_plugin)(aTHX_ kw, kwlen, op_ptr);
}

MODULE = tmp  PACKAGE = tmp

BOOT:
  next_keyword_plugin = PL_keyword_plugin;
  PL_keyword_plugin = &MY_keyword_plugin;

Next, we need to provide this please_keyword function that implements our required behaviour.

static int please_keyword(OP **op_ptr)
{
  *op_ptr = newOP(OP_NULL, 0);

  return KEYWORD_PLUGIN_STMT;
}

The last line of this function, the return statement, returns KEYWORD_PLUGIN_STMT to tell the perl parser that this plugin has consumed the keyword, and that the resulting syntactic structure should be considered as a statement. The effect of this resets the parser into wanting to find the start of a new statement following it, which lets it then see the say call as normal.

As mentioned in the previous post, a parser plugin provides new behaviour into the parse tree by using the op_ptr double-pointer argument, to give a new optree back to the parser to represent the code the plugin just parsed. Since our plugin doesn't actually do anything, we don't really need to construct an optree. But since there is no way to tell perl "nothing", instead we have to build a single op which does nothing; this is OP_NULL. Don't worry too much about this line of code; consider it simply as part of the required boilerplate here, and I'll expand more on the subject in a later post.

Lexical Hints

Implemented as it stands, there's one large problem with this syntax plugin. You may recall from part 1 that the plugin hook chain is global to the entire parser, and is invoked whenever code is found anywhere. This means that other code far away from our plugin will also get disturbed. For example, if the following code appears elsewhere when our module is loaded:

sub please { print @_; }

print "Before\n";
please "It works\n";
print "After\n";

then instead of calling the please function, our plugin consumes the keyword and substitutes a do-nothing op, so the string is never printed and we just get a warning:

Useless use of a constant ("It works\n") in void context at - line 4.

This happens because our plugin has no knowledge of the lexical scope of the keywords; it will happily consume the keyword wherever it appears in any file, in any scope. This doesn't follow the usual way that perl code is parsed; ideally we would like our keywords to be enabled or disabled lexically. The lexical hints hash, %^H, can be used to provide this lexical scoping. We can make use of this by setting a lexical hint in this hash when the module is loaded, and having the XS code look for that hint to control whether it consumes the keyword.

We start by adding an import function to the perl module that loads the XS code, which sets this hint. There's a strong convention among CPAN modules, which must all share this one hash, about how to name keys within it: they are named using the controlling module's name, followed by a / symbol, followed by the name of the individual key.

package tmp;

require XSLoader;
XSLoader::load( __PACKAGE__ );

sub import
{
   $^H{"tmp/please"}++;
}
The perl interpreter will now assist us with maintaining the value of %^H, ensuring that this key only has a true value during lexical scopes which have used our module. We can now extend the test function in the XS code to check for this condition. In XS code, the hints hash is accessible as the GvHV of PL_hintgv:

static int MY_keyword_plugin(pTHX_ char *kw, STRLEN kwlen,
    OP **op_ptr)
{
  HV *hints = GvHV(PL_hintgv);
  if(kwlen == 6 && strEQ(kw, "please") &&
     hints && hv_fetchs(hints, "tmp/please", 0))
    return please_keyword(op_ptr);

  return (*next_keyword_plugin)(aTHX_ kw, kwlen, op_ptr);
}

We now have a well-behaved syntax plugin which is properly aware of lexical scope. It only responds to its keyword in scopes where use tmp is in effect:

$ perl -wE '{use tmp; sub please { say @_ }; please "hello"}'
Useless use of a constant ("hello") in void context at -e line 1.

$ perl -wE '{use tmp;} sub please { say @_ }; please "hello"'
hello

Of course, we still haven't actually added any real behaviour yet, but we're at least all set up with a lexically-scoped keyword we can use to add it. We'll see how we can start to introduce new behaviour to the perl interpreter in part 3.

<< First | < Prev | Next >