2021/07/30

Perl UV binding hits version 2.000

Over the past few months I've been working on finishing off the libuv Perl binding module, UV. Yesterday I finally got it finished enough to feel like calling it version 2.000. Now's a good time to take a look at it.

libuv itself is a cross-platform event handling library, which focuses on providing nicely portable abstractions for things like TCP sockets, timers, and sub-process management across UNIX, Windows and other platforms. Event-based socket handling has traditionally been difficult to write portably, because Windows works very differently from everywhere else. libuv provides a large number of helpful wrappers for writing event-based code portably, freeing the developer from having to care about these differences.

A number of languages have nice bindings for libuv, but until recently there wasn't a good one for Perl. My latest project for The Perl Foundation aimed to fix this. The latest release of UV version 2.000 indicates that this is now done.

It's unlikely that most programs would choose to operate directly with UV itself, but rather via some higher-level event system. There are UV adapter modules for IO::Async (IO::Async::Loop::UV), Mojo (Mojo::Reactor::UV), and Future::IO (Future::IO::Impl::UV) at least.

The UV module certainly wraps much of what libuv has to offer, but there are still some parts missing. libuv can watch filesystems for changes to files, and provides asynchronous filesystem access functions - both of these are currently missing from the Perl binding. Threadpools are an entire concept that doesn't map very well to the Perl language, so they are absent too. libuv also lists an entire category of "miscellaneous functions", most of which are already available independently in Perl, so there seems little point in wrapping the libuv versions.

Finally, we should take note of one thing that doesn't work - the UV::TCP->open and UV::UDP->open functions when running on Windows. The upshot is that on Windows you cannot create TCP or UDP sockets in your application independently of libuv and then hand them over to be handled by the library. This is because on Windows there are fundamentally two different kinds of sockets, requiring two different sets of API to access them - ones created with WSA_FLAG_OVERLAPPED, and ones not. libuv needs that flag in order to perform event-based IO on sockets, and so it won't work with sockets created without it - which is the usual kind that most other modules, and perl itself, will create. This means that on Windows the only sockets you can use with the UV module are ones created by UV itself - such as by asking it to connect out to servers, or to listen for and accept incoming connections. Fortunately, this is sufficient for the vast majority of applications.

I would like to finish up by saying thanks to The Perl Foundation for funding me to complete this project.

2021/02/26

Writing a Perl Core Feature - part 11: Core modules


Our new feature is now implemented, tested, and documented. There's just one last thing we need to do - update the bundled modules that come with core. Specifically, because we've added some new syntax, we need to update B::Deparse to be able to deparse it.

When the isa operator was added, the deparse module needed to be informed about the new OP_ISA opcode, in this small addition: (github.com/Perl/perl5).

--- a/lib/B/Deparse.pm
+++ b/lib/B/Deparse.pm
@@ -52,7 +52,7 @@ use B qw(class main_root main_start main_cv svref_2object opnumber perlstring
         MDEREF_SHIFT
     );
 
-$VERSION = '1.51';
+$VERSION = '1.52';
 use strict;
 our $AUTOLOAD;
 use warnings ();
@@ -3060,6 +3060,8 @@ sub pp_sge { binop(@_, "ge", 15) }
 sub pp_sle { binop(@_, "le", 15) }
 sub pp_scmp { maybe_targmy(@_, \&binop, "cmp", 14) }
 
+sub pp_isa { binop(@_, "isa", 15) }
+
 sub pp_sassign { binop(@_, "=", 7, SWAP_CHILDREN) }
 sub pp_aassign { binop(@_, "=", 7, SWAP_CHILDREN | LIST_CONTEXT) }

As you can see it's quite a small addition; we just need to add a new method to the main B::Deparse package, named after the new opcode. This new method calls down to the common binop function shared by the various binary operators, which recurses into the two halves of the optree and returns a combined result with the "isa" string in between them.
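
For the curious, that shared binop helper boils down to something like the following sketch - heavily simplified, since the real B::Deparse code also handles swapped children, assignment forms, and various precedence subtleties:

# A rough sketch only - not the real B::Deparse::binop
sub binop {
    my ($self, $op, $cx, $opname, $prec) = @_;
    my $left  = $self->deparse($op->first, $prec);
    my $right = $self->deparse($op->last, $prec);
    return $self->maybe_parens("$left $opname $right", $cx, $prec);
}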

A more complex addition was made with the try syntax, as can be seen at (github.com/Perl/perl5); abbreviated here:

+sub pp_leavetrycatch {
+    my $self = shift;
+    my ($op) = @_;
...
+    my $trycode = scopeop(0, $self, $tryblock);
+    my $catchvar = $self->padname($catch->targ);
+    my $catchcode = scopeop(0, $self, $catchblock);
+
+    return "try {\n\t$trycode\n\b}\n" .
+           "catch($catchvar) {\n\t$catchcode\n\b}\cK";
+}

As before, this adds a new method named after the new opcode (in the case of the try/catch syntax that is OP_LEAVETRYCATCH). The body of this method likewise recurses into parts of the sub-tree it was passed; in this case two scope ops for the bodies of the blocks, plus a lexical variable name for the catch variable. The method then returns a new string combining the various parts together along with the required braces, linefeeds, and indentation hints.

We can tell we need to add this for our new banana feature, as currently this does not deparse properly:

leo@shy:~/src/bleadperl/perl [git]
$ ./perl -Ilib -Mexperimental=banana -MO=Deparse -ce 'print ban "Hello, world" ana;'
unexpected OP_BANANA at lib/B/Deparse.pm line 1664.
BEGIN {${^WARNING_BITS} = "\x10\x01\x00\x00\x00\x50\x04\x00\x00\x00\x00\x00\x00\x55\x51\x55\x50\x51\x45\x00"}
use feature 'banana';
print XXX;
-e syntax OK

We'll fix this by adding a new pp_banana in an appropriate place, perhaps just after the ones for lc/uc/fc. Don't forget to bump the $VERSION number too:

leo@shy:~/src/bleadperl/perl [git]
$ nvim lib/B/Deparse.pm 

leo@shy:~/src/bleadperl/perl [git]
$ git diff 
diff --git a/lib/B/Deparse.pm b/lib/B/Deparse.pm
index 67147f12dd..f6039a435d 100644
--- a/lib/B/Deparse.pm
+++ b/lib/B/Deparse.pm
@@ -52,7 +52,7 @@ use B qw(class main_root main_start main_cv svref_2object opnumber perlstring
         MDEREF_SHIFT
     );
 
-$VERSION = '1.56';
+$VERSION = '1.57';
 use strict;
 our $AUTOLOAD;
 use warnings ();
@@ -2824,6 +2824,13 @@ sub pp_lc { dq_unop(@_, "lc") }
 sub pp_quotemeta { maybe_targmy(@_, \&dq_unop, "quotemeta") }
 sub pp_fc { dq_unop(@_, "fc") }
 
+sub pp_banana {
+    my $self = shift;
+    my ($op, $cx) = @_;
+    my $kid = $op->first;
+    return "ban " . $self->deparse($kid, 1) . " ana";
+}
+
 sub loopex {
     my $self = shift;
     my ($op, $cx, $name) = @_;

This new method recurses down to deparse the subtree, and returns its string wrapped in the appropriate syntax. That should be all we need:

leo@shy:~/src/bleadperl/perl [git]
$ ./perl -Ilib -Mexperimental=banana -MO=Deparse -ce 'print ban "Hello, world" ana;'
BEGIN {${^WARNING_BITS} = "\x10\x01\x00\x00\x00\x50\x04\x00\x00\x00\x00\x00\x00\x55\x51\x55\x50\x51\x45\x00"}
use feature 'banana';
print ban 'Hello, world' ana;
-e syntax OK

Of course, this being a perl module we should remember to update its unit tests.

leo@shy:~/src/bleadperl/perl [git]
$ git diff lib/B/Deparse.t
diff --git a/lib/B/Deparse.t b/lib/B/Deparse.t
index 24eb445041..0fe6940cb3 100644
--- a/lib/B/Deparse.t
+++ b/lib/B/Deparse.t
@@ -3171,3 +3171,10 @@ try {
 catch($var) {
     SECOND();
 }
+####
+# banana
+# CONTEXT use feature 'banana'; no warnings 'experimental::banana';
+ban 'literal' ana;
+ban $a ana;
+ban $a . $b ana;
+ban "stringify $a" ana;

leo@shy:~/src/bleadperl/perl [git]
$ ./perl t/harness lib/B/Deparse.t 
../lib/B/Deparse.t .. ok     
All tests successful.
Files=1, Tests=321,  9 wallclock secs ( 0.14 usr  0.00 sys +  8.99 cusr  0.38 csys =  9.51 CPU)
Result: PASS

Because in part 10 we added documentation for a new function in pod/perlfunc.pod, there's another test that needs updating:

leo@shy:~/src/bleadperl/perl [git]
$ ./perl t/harness ext/Pod-Functions/t/Functions.t 
../ext/Pod-Functions/t/Functions.t .. 1/? 
#   Failed test 'run as plain program'
#   at t/Functions.t line 55.
#          got: '
...
Result: FAIL

We can fix that by adding the new function to the expected list in the test file itself:

leo@shy:~/src/bleadperl/perl [git]
$ nvim ext/Pod-Functions/t/Functions.t

leo@shy:~/src/bleadperl/perl [git]
$ git diff ext/Pod-Functions/t/Functions.t
diff --git a/ext/Pod-Functions/t/Functions.t b/ext/Pod-Functions/t/Functions.t
index 2beccc1ac6..4d5b03e978 100644
--- a/ext/Pod-Functions/t/Functions.t
+++ b/ext/Pod-Functions/t/Functions.t
@@ -76,7 +76,7 @@ Functions.t - Test Pod::Functions
 __DATA__
 
 Functions for SCALARs or strings:
-     chomp, chop, chr, crypt, fc, hex, index, lc, lcfirst,
+     ban, chomp, chop, chr, crypt, fc, hex, index, lc, lcfirst,
      length, oct, ord, pack, q/STRING/, qq/STRING/, reverse,
      rindex, sprintf, substr, tr///, uc, ucfirst, y///
 
leo@shy:~/src/bleadperl/perl [git]
$ ./perl t/harness ext/Pod-Functions/t/Functions.t 
../ext/Pod-Functions/t/Functions.t .. ok     
All tests successful.
Files=1, Tests=234,  1 wallclock secs ( 0.04 usr  0.01 sys +  0.23 cusr  0.00 csys =  0.28 CPU)
Result: PASS

At this point, we're done. We've now completed all the steps to add a new feature to the perl interpreter. As well as all the steps required to actually implement it in the core binary itself, we've updated the tests, documentation, and support modules to match.

Along the way we've seen examples from real commits to the perl tree while we made our own. Any particular new feature will of course have its own variations and differences - there are still many parts of the interpreter we haven't touched on in this series. It would be difficult to cover every possible idea for things that could be added or changed, but hopefully, having completed this series, you'll at least have a good overview of the main pieces likely to be involved, and some starting-off points to explore further for whatever additional details your particular situation requires.


2021/02/24

Writing a Perl Core Feature - part 10: Documentation


Now that we have our new feature nicely implemented and tested, we're nearly finished. We just have a few more loose ends to tidy up. The first of these is to take a look at some documentation.

We've already done one small documentation addition to perldiag.pod when we added the new warning message, but the bulk of documentation to explain a new feature would likely be found in one of the main documents - perlsyn.pod, perlop.pod, perlfunc.pod or similar. Exactly which of these is best would depend on the nature of the specific feature.

The isa feature, being a new infix operator, was documented in perlop.pod: (github.com/Perl/perl5).

...
+=head2 Class Instance Operator
+X<isa operator>
+
+Binary C<isa> evaluates to true when left argument is an object instance of
+the class (or a subclass derived from that class) given by the right argument.
+If the left argument is not defined, not a blessed object instance, or does
+not derive from the class given by the right argument, the operator evaluates
+as false. The right argument may give the class either as a barename or a
+scalar expression that yields a string class name:
+
+    if( $obj isa Some::Class ) { ... }
+
+    if( $obj isa "Different::Class" ) { ... }
+    if( $obj isa $name_of_class ) { ... }
+
+This is an experimental feature and is available from Perl 5.31.6 when enabled
+by C<use feature 'isa'>. It emits a warning in the C<experimental::isa>
+category.

Let's now write a little bit of documentation for our new banana feature. Since it is a named function-like operator (though with odd syntax involving a second trailing named keyword), perhaps we'll document it in perlfunc.pod. We'll style it similarly to the case-changing functions lc and uc to get some suggested wording.

leo@shy:~/src/bleadperl/perl [git]
$ nvim pod/perlfunc.pod 

leo@shy (1 job):~/src/bleadperl/perl [git]
$ git diff
diff --git a/pod/perlfunc.pod b/pod/perlfunc.pod
index b655a08ecc..319e9aab96 100644
--- a/pod/perlfunc.pod
+++ b/pod/perlfunc.pod
@@ -114,6 +114,7 @@ X<scalar> X<string> X<character>
 
 =for Pod::Functions =String
 
+L<C<ban>|/ban EXPR ana>,
 L<C<chomp>|/chomp VARIABLE>, L<C<chop>|/chop VARIABLE>,
 L<C<chr>|/chr NUMBER>, L<C<crypt>|/crypt PLAINTEXT,SALT>,
 L<C<fc>|/fc EXPR>, L<C<hex>|/hex EXPR>,
@@ -136,6 +137,10 @@ prefixed with C<CORE::>.  The
 L<C<"fc"> feature|feature/The 'fc' feature> is enabled automatically
 with a C<use v5.16> (or higher) declaration in the current scope.
 
+L<C<ban>|/ban EXPR ana> is available only if the
+L<C<"banana"> feature|feature/The 'banana' feature.> is enabled or if it is
+prefixed with C<CORE::>.
+
 =item Regular expressions and pattern matching
 X<regular expression> X<regex> X<regexp>
 
@@ -773,6 +778,15 @@ your L<atan2(3)> manpage for more information.
 
 Portability issues: L<perlport/atan2>.
 
+=item ban EXPR ana
+X<ban>
+
+=for Pod::Functions return ROT13 transformed version of a string
+
+Applies the "ROT13" transform to upper- and lower-case letters in the given
+expression string, returning the newly-formed string. Non-letter characters
+are left unchanged.
+
 =item bind SOCKET,NAME
 X<bind>

While this will do as a short example here, any real feature would likely need rather more words than this.

When editing POD files it's good to get into the habit of running the porting tests (or at least the POD checking ones) before committing, to check the formatting is valid:

leo@shy:~/src/bleadperl/perl [git]
$ ./perl t/harness t/porting/pod*.t
porting/podcheck.t ... ok         
porting/pod_rules.t .. ok   
All tests successful.
Files=2, Tests=1472, 34 wallclock secs ( 0.20 usr  0.00 sys + 33.79 cusr  0.15 csys = 34.14 CPU)
Result: PASS

While I was writing this documentation it occurred to me to write about how the function handles Unicode characters vs byte strings, which got me thinking about how it actually behaves. It turns out the implementation doesn't handle this properly, as we can demonstrate with a new test:

--- a/t/op/banana.t
+++ b/t/op/banana.t
@@ -11,7 +11,7 @@ use strict;
 use feature 'banana';
 no warnings 'experimental::banana';
 
-plan 7;
+plan 8;
 
 is(ban "ABCD" ana, "NOPQ", 'Uppercase ROT13');
 is(ban "abcd" ana, "nopq", 'Lowercase ROT13');
@@ -23,3 +23,8 @@ my $str = "efgh";
 is(ban $str ana, "rstu", 'Lexical variable');
 is(ban $str . "IJK" ana, "rstuVWX", 'Concat expression');
 is("(" . ban "LMNO" ana . ")", "(YZAB)", 'Outer concat');
+
+{
+    use utf8;
+    is(ban "café" ana, "pnsé", 'Unicode string');
+}

leo@shy:~/src/bleadperl/perl [git]
$ ./perl t/harness t/op/banana.t 
op/banana.t .. 1/8 # Failed test 8 - Unicode string at op/banana.t line 29
#      got "pnsé"
# expected "pns�"
op/banana.t .. Failed 1/8 subtests 

This comes down to a bug in the pp_banana opcode function, which used the internal byte buffer of the incoming SV (SvPV) without inspecting the corresponding SvUTF8 flag. Such a pattern is always indicative of a Unicode support bug. We can easily fix this:

leo@shy:~/src/bleadperl/perl [git]
$ git diff pp.c
diff --git a/pp.c b/pp.c
index 9725806b84..3dbe21fadd 100644
--- a/pp.c
+++ b/pp.c
@@ -7211,6 +7211,8 @@ PP(pp_banana)
     s = SvPV(arg, len);
 
     mPUSHs(newSVpvn_rot13(s, len));
+    if(SvUTF8(arg))
+        SvUTF8_on(TOPs);
     RETURN;
 }
 

leo@shy:~/src/bleadperl/perl [git]
$ ./perl t/harness t/op/banana.t 
op/banana.t .. ok   
All tests successful.
Files=1, Tests=8,  0 wallclock secs ( 0.02 usr  0.00 sys +  0.02 cusr  0.00 csys =  0.04 CPU)
Result: PASS

Writing good documentation is an integral part of the process of developing a new feature. Firstly it helps to explain the feature to users so they know how to use it. But often you find that the process of writing the words helps you think about different aspects of that feature that you may not have considered before. With that new frame of mind you sometimes discover missing parts to it, or uncover bugs or corner-cases that need fixing. Make sure to spend time working on the documentation for any new feature - it is said that you never truly understand something until you try to teach it to someone else.


2021/02/22

Writing a Perl Core Feature - part 9: Tests


By the end of part 8 we finally managed to see an actual implementation of our new feature. We tested a couple of things on the commandline directly to see that it seems to be doing the right thing. For a real core feature though it would be better to have it tested in a more automated, repeatable fashion. This is what the core unit tests are for.

The core perl source distribution contains a t/ directory with unit test files, very similar to the structure used by regular CPAN modules. The process for running these is a little different; as we already saw back in part 3 they need to be invoked by t/harness. The files themselves are somewhat more limited in what other modules they can use, so the full suite of Test:: modules is unavailable. Still, they are expected to emit the regular TAP output we've come to expect from Perl unit tests, and they tend to be structured quite similarly inside.

For example, the isa feature added an entire new file for its unit tests. As they all relate to the new syntax and semantics around a new opcode, they go in a file under the t/op directory. I won't paste the entire t/op/isa.t file, but consider this small section: (github.com/Perl/perl5):

#!./perl

BEGIN {
    chdir 't' if -d 't';
    require './test.pl';
    set_up_inc('../lib');
    require Config;
}

use strict;
use feature 'isa';
no warnings 'experimental::isa';

...

my $baseobj = bless {}, "BaseClass";

# Bareword package name
ok($baseobj isa BaseClass, '$baseobj isa BaseClass');
ok(not($baseobj isa Another::Class), '$baseobj is not Another::Class');

While it doesn't use Test::More, it does still have access to some similar testing functions such as the ok test. The initial lines of boilerplate in the BEGIN block set up the testing functions from the test.pl script, so we can use them in the actual tests.

Let's now have a go at writing some tests for our new banana feature. As it works like a text transformation function, we can imagine a few different test strings to throw at it.

leo@shy:~/src/bleadperl/perl [git]
$ nvim t/op/banana.t

leo@shy:~/src/bleadperl/perl [git]
$ cat t/op/banana.t
#!./perl

BEGIN {
    chdir 't' if -d 't';
    require './test.pl';
    set_up_inc('../lib');
    require Config;
}

use strict;
use feature 'banana';
no warnings 'experimental::banana';

plan 7;

is(ban "ABCD" ana, "NOPQ", 'Uppercase ROT13');
is(ban "abcd" ana, "nopq", 'Lowercase ROT13');
is(ban "1234" ana, "1234", 'Numbers unaffected');

is(ban "a! b! c!" ana, "n! o! p!", 'Whitespace and symbols intermingled');

my $str = "efgh";
is(ban $str ana, "rstu", 'Lexical variable');

is(ban $str . "IJK" ana, "rstuVWX", 'Concat expression');
is("(" . ban "LMNO" ana . ")", "(YZAB)", 'Outer concat');

$ ./perl t/harness t/op/banana.t
op/banana.t .. ok   
All tests successful.
Files=1, Tests=7,  1 wallclock secs ( 0.02 usr  0.00 sys +  0.03 cusr  0.00 csys =  0.05 CPU)
Result: PASS

Here we have used the is() testing function to check that the strings generated by the ban ... ana operator are what we expect them to be. We've tested both uppercase and lowercase letters, and that non-letter characters such as numbers, symbols and spaces remain unaffected. In addition we've added some syntax tests, checking variables as well as literal string constants, and demonstrating that the parser handles the precedence of the operator correctly when mixed with string concatenation. All appears to be working fine.

Before we commit this one there is one last thing we have to do. Having added a new file to the distribution, one of the porting tests will now be unhappy:

leo@shy:~/src/bleadperl/perl [git]
$ git add t/op/banana.t 

leo@shy:~/src/bleadperl/perl [git]
$ make test_porting
...
porting/manifest.t ........ 9848/? # Failed test 10502 - git ls-files
gives the same number of files as MANIFEST lists at porting/manifest.t line 101
#      got "6304"
# expected "6303"
# Failed test 10504 - Nothing added to the repo that isn't in MANIFEST
at porting/manifest.t line 113
#      got "1"
# expected "0"
# Failed test 10505 - Nothing added to the repo that isn't in MANIFEST
at porting/manifest.t line 114
#      got "not in MANIFEST: t/op/banana.t"
# expected "not in MANIFEST: "
porting/manifest.t ........ Failed 3/10507 subtests 

To fix this we need to manually add an entry to the MANIFEST file; unlike the common practice for CPAN modules, this file is not automatically generated.

leo@shy:~/src/bleadperl/perl [git]
$ nvim MANIFEST

leo@shy:~/src/bleadperl/perl [git]
$ git diff MANIFEST
diff --git a/MANIFEST b/MANIFEST
index 71d3b453da..03ecdda3d2 100644
--- a/MANIFEST
+++ b/MANIFEST
@@ -5779,6 +5779,7 @@ t/op/attrproto.t          See if the prototype attribute works
 t/op/attrs.t                   See if attributes on declarations work
 t/op/auto.t                    See if autoincrement et all work
 t/op/avhv.t                    See if pseudo-hashes work
+t/op/banana.t                  See if the ban ... ana syntax works
 t/op/bless.t                   See if bless works
 t/op/blocks.t                  See if BEGIN and friends work
 t/op/bop.t                     See if bitops work

leo@shy:~/src/bleadperl/perl [git]
$ make test_porting
...
Result: PASS

Of course, in this test file we've added only 7 tests. Any real feature would likely have a lot more testing around it, to deal with a wider variety of situations and corner-cases. Often the really interesting cases only come to light after trying to use the feature for real and finding odd situations that don't quite work as expected; so after adding a new feature, expect to spend a while expanding the test file to cover more things. It's especially useful to add tests for new situations you find yourself using the feature in, even if they currently work just fine. The presence of such tests helps ensure the feature keeps working that way.


2021/02/19

Writing a Perl Core Feature - part 8: Interpreter internals


At this point we are most of the way to adding a new feature to the Perl interpreter. In part 4 we created an opcode function to represent the new behaviour, part 5 and part 6 added compiler support to recognise the syntax used to represent it, and in part 7 we made a helper function to provide the required behaviour. It's now time to tie them all together.

When we looked at opcodes and optrees back in part 4, I mentioned that each node of the optree performs a little part of the execution of a function, with child nodes usually obtaining some piece of data that gets passed up to parent nodes to operate on. I skipped over exactly how that all works, so for this part let's look at it in more detail.

The data model used by the perl interpreter for runtime execution of code is that of a stack machine. Most opcodes that operate on regular perl data values do so by interacting with the data stack (often simply called "the stack", though this is sometimes ambiguous, as there are in fact several stacks within the perl interpreter). As the interpreter walks along an optree invoking the function associated with each opcode, these functions either push values onto the stack, or pop values already there back off it again in order to use them.

For example, in part 4 we saw how the line of code my $x = 5; might get represented by an optree of three nodes - an OP_SASSIGN with two child nodes OP_CONST and OP_PADSV.

When this statement is executed the optree nodes are visited in postfix order, with the two child BASEOPs running first in order to push some values to the stack, followed by the assignment BINOP afterwards, which takes those values back off the stack and performs the appropriate assignment.
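
As a small worked illustration of that ordering (glossing over the finer details of what pp_sassign really does):

/* Runtime execution order for:   my $x = 5;
 *
 *   pp_const    pushes the constant SV (5) onto the stack
 *   pp_padsv    pushes the pad slot for $x onto the stack
 *   pp_sassign  pops both, stores the value into $x, and leaves the
 *               result of the assignment on the stack
 */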

Let's now take a closer look at the code inside one of the actual functions which implements this. For example pp_const, the function for OP_CONST, consists of three short lines:

PP(pp_const)
{
    dSP;
    XPUSHs(cSVOP_sv);
    RETURN;
}

Across these three lines, all four of the symbols used are in fact macros:

  1. dSP declares some local variables for tracking state, used by later macros
  2. cSVOP_sv fetches the actual SV pointer out of the SVOP itself. This will be the one holding the constant's value
  3. XPUSHs extends the (data) stack if necessary, then pushes the SV onto it
  4. RETURN resynchronises the interpreter state from the local variables, and arranges for the opcode function to return the next opcode, for the toplevel instruction loop

The pp_padsv function is somewhat more complex, but the essential parts of it are quite similar; the following example is heavily paraphrased:

PP(pp_padsv)
{
    SV ** const padentry = &(PAD_SVl(op->op_targ));
    XPUSHs(*padentry);
    RETURN;
}

This time, rather than the cSVOP_sv which takes the SV out of the op itself, we use PAD_SVl which looks up the SV in the currently-active pad, by using the target index which is stored in the op.

When the isa feature was added, its main pp_isa opcode function was actually quite small: (github.com/Perl/perl5).

--- a/pp.c
+++ b/pp.c
@@ -7143,6 +7143,18 @@ PP(pp_argcheck)
     return NORMAL;
 }
 
+PP(pp_isa)
+{
+    dSP;
+    SV *left, *right;
+
+    right = POPs;
+    left  = TOPs;
+
+    SETs(boolSV(sv_isa_sv(left, right)));
+    RETURN;
+}
+

Since OP_ISA is a BINOP it is expecting to find two arguments on the stack; traditionally these are called left and right. This opcode function simply takes those two values and calls the sv_isa_sv() function, which returns a boolean truth value. The boolSV helper function returns an SV pointer to represent this boolean value, which is then used as the result of the opcode itself.

As a small performance optimisation, this function only POPs one argument, before changing the top-of-stack value to its result using SETs. This is equivalent to POPing two of them and PUSHing its result, except that it doesn't have to alter the stack pointer as many times.
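
To make that concrete, here is a rough sketch (not the committed code) of what the equivalent unoptimised body would look like:

/* Unoptimised equivalent: pop both operands, then push the result,
 * moving the stack pointer three times instead of once */
right = POPs;
left  = POPs;

PUSHs(boolSV(sv_isa_sv(left, right)));
RETURN;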

For more of a look at how the stack works, you could also take a look at another post from my series on Parser Plugins: Part 3a - The Stack.

Let's now take a look at implementing our banana feature for real. Recall in part 4 we added the pp_banana function with some placeholder content that just died with a panic message if invoked. We'll now replace that with a real implementation:

leo@shy:~/src/bleadperl/perl [git]
$ nvim pp.c 

leo@shy:~/src/bleadperl/perl [git]
$ git diff pp.c
diff --git a/pp.c b/pp.c
index 93141454e1..bced3d23ea 100644
--- a/pp.c
+++ b/pp.c
@@ -7203,7 +7203,15 @@ PP(pp_cmpchain_dup)
 
 PP(pp_banana)
 {
-    DIE(aTHX_ "panic: we have no bananas");
+    dSP;
+    const char *s;
+    STRLEN len;
+    SV *arg = POPs;
+
+    s = SvPV(arg, len);
+
+    PUSHs(newSVpvn_rot13(s, len));
+    RETURN;
 }
 
 /*

Now let's rebuild perl and try it out:

leo@shy:~/src/bleadperl/perl [git]
$ make -j4 perl
...

leo@shy:~/src/bleadperl/perl [git]
$ ./perl -Ilib -E 'use experimental "banana"; say ban "Hello, world!" ana;'
Uryyb, jbeyq!

Well, it certainly looks plausible - we've got back a different string of the same length, with different letters but the same capitalisation and identical non-letter characters. Let's compare with something like tr to see if it's correct:

leo@shy:~/src/bleadperl/perl [git]
$ echo "Uryyb, jbeyq!" | tr "A-Za-z" "N-ZA-Mn-za-m"
Hello, world!

Seems good. But it turns out we've still missed something. This function has a memory leak. We can demonstrate it by writing a small example that calls ban ... ana a large number of times (say, a thousand), and printing the total count of SVs on the heap before and after. There's a handy function in perl's unit test suite called XS::APItest::sv_count we can use here:

leo@shy (1 job):~/src/bleadperl/perl [git]
$ ./perl -Ilib -I. -MXS::APItest=sv_count -E \
  'use experimental "banana";
   say sv_count();
   ban "Hello, world!" ana for 1..1000;
   say sv_count();'
5321
6321

Oh dear. The SV count is a thousand higher afterwards than before, suggesting we leaked an SV on every call.

It turns out this is because of an optimisation the interpreter uses, where SV pointers on the Perl data stack don't actually contribute to reference counting. When values get POPped from the stack we don't have to decrement their refcount; when values get pushed we don't have to increment it. This saves some runtime cost, by not having to adjust those counts all the time. The consequence is that we have to be a bit more careful when returning newly-constructed values. We must mark the value as mortal, which means we are saying that its reference count is artificially high (because of that pointer on the stack), and that perl should decrement the reference count at some point soon, when it next discards temporary values.

Because this sort of thing is done a lot, there is a handy macro called mPUSHs, which mortalizes an SV when it pushes it to the data stack. We can call that instead:

$ git diff pp.c
...
+    mPUSHs(newSVpvn_rot13(s, len));
+    RETURN;
 }
 
 /*
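
For reference, mPUSHs is only a thin convenience wrapper; its definition in pp.h amounts to:

#define mPUSHs(s)   PUSHs(sv_2mortal(s))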

Now when we try our leak test we find the same SV count before and after, meaning no leak has occurred:

leo@shy:~/src/bleadperl/perl [git]
$ ./perl -Ilib -I. -MXS::APItest=sv_count -E ...
5321
5321

We may be onto a winner here.


2021/02/17

Writing a Perl Core Feature - part 7: Support functions


So far in this series we've seen several modifications and small additions, to add the required bits and pieces for our new feature to various parts of the perl interpreter. Often when adding anything but the very smallest and simplest of features or changes, it becomes necessary not just to modify existing things, but to add some new support functions as well.

For example, adding the isa feature required adding a new function to actually implement the bulk of the operation, which is then called from the pp_isa opcode function. This helper function was added into universal.c in this commit: (github.com/Perl/perl5).

--- a/universal.c
+++ b/universal.c
@@ -187,6 +187,74 @@ Perl_sv_derived_from_pvn(pTHX_ SV *sv, const char *const name, const STRLEN len,
     return sv_derived_from_svpvn(sv, NULL, name, len, flags);
 }
 
+/*
+=for apidoc sv_isa_sv
+
+Returns a boolean indicating whether the SV is an object reference and is
+derived from the specified class, respecting any C<isa()> method overloading
+it may have. Returns false if C<sv> is not a reference to an object, or is
+not derived from the specified class.
...
+
+=cut
+
+*/
+
+bool
+Perl_sv_isa_sv(pTHX_ SV *sv, SV *namesv)
+{
+    GV *isagv;
+
+    PERL_ARGS_ASSERT_SV_ISA_SV;
+
+    if(!SvROK(sv) || !SvOBJECT(SvRV(sv)))
+        return FALSE;
+
...
+    return sv_derived_from_sv(sv, namesv, 0);
+}
+
 /*
 =for apidoc sv_does_sv

Like all good helper functions, this one has a name beginning with the Perl_ prefix and takes the pTHX_ macro as its first parameter. To make the function properly visible to other code within the interpreter, an entry needed adding to the embed.fnc file, which lists all of the functions. (github.com/Perl/perl5).

--- a/embed.fnc
+++ b/embed.fnc
@@ -1777,6 +1777,7 @@ ApdR      |bool   |sv_derived_from_sv|NN SV* sv|NN SV *namesv|U32 flags
 ApdR   |bool   |sv_derived_from_pv|NN SV* sv|NN const char *const name|U32 flags
 ApdR   |bool   |sv_derived_from_pvn|NN SV* sv|NN const char *const name \
                                     |const STRLEN len|U32 flags
+ApdRx  |bool   |sv_isa_sv      |NN SV* sv|NN SV* namesv
 ApdR   |bool   |sv_does        |NN SV* sv|NN const char *const name
 ApdR   |bool   |sv_does_sv     |NN SV* sv|NN SV* namesv|U32 flags
 ApdR   |bool   |sv_does_pv     |NN SV* sv|NN const char *const name|U32 flags

This file stores pipe-separated columns, containing:

  • A set of flags - in this case marking an API function (A), having the Perl_ prefix (p), with documentation (d), whose return value must not be ignored (R) and is currently experimental (x)
  • The return type
  • The name
  • Argument types in all remaining columns; where NN prefixes an argument which must not be passed as NULL

For our new banana feature, let's now think of some semantics. Perhaps, given the example code we saw yesterday, it should return a new string built from its argument. For the arbitrary reason of having something interesting yet unlikely to appear in practice, let's make it return a ROT13-transformed version.
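
Purely to pin down the intended semantics, here is what ROT13 looks like in ordinary Perl - $input here is just a stand-in for whatever string is being transformed:

# ROT13 in plain Perl, for illustration only
my $output = $input =~ tr/A-Za-z/N-ZA-Mn-za-m/r;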

Let's now add a helper function to do this - something to construct a new string SV containing the ROT13'ed transformation of the given input. We'll begin by picking a name for the new function, adding a definition line to the embed.fnc list, and running the regen/embed.pl regeneration script:

leo@shy:~/src/bleadperl/perl [git]
$ nvim embed.fnc 

leo@shy:~/src/bleadperl/perl [git]
$ git diff embed.fnc
diff --git a/embed.fnc b/embed.fnc
index eb7b47601a..74946566e7 100644
--- a/embed.fnc
+++ b/embed.fnc
@@ -1488,6 +1488,7 @@ ApdR      |SV*    |newSVuv        |const UV u
 ApdR   |SV*    |newSVnv        |const NV n
 ApdR   |SV*    |newSVpv        |NULLOK const char *const s|const STRLEN len
 ApdR   |SV*    |newSVpvn       |NULLOK const char *const buffer|const STRLEN len
+ApdR   |SV*    |newSVpvn_rot13 |NN const char *const s|const STRLEN len
 ApdR   |SV*    |newSVpvn_flags |NULLOK const char *const s|const STRLEN len|const U32 flags
 ApdR   |SV*    |newSVhek       |NULLOK const HEK *const hek
 ApdR   |SV*    |newSVpvn_share |NULLOK const char* s|I32 len|U32 hash

leo@shy:~/src/bleadperl/perl [git]
$ perl regen/embed.pl 
Changed: proto.h embed.h

Take a look now at the changes it's made.

  • A new macro in embed.h which calls the full Perl_-prefixed function name from its shorter alias. The macro makes sure to pass in the aTHX_ parameter, meaning we don't have to remember that all the time
  • A prototype and an arguments assertion macro for the function in proto.h
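
Roughly speaking, the newly-generated lines look something like the following - an illustrative sketch only, since the real files carry extra attribute macros and different formatting:

/* embed.h - the short name becomes a macro that supplies aTHX_ */
#define newSVpvn_rot13(a,b)     Perl_newSVpvn_rot13(aTHX_ a,b)

/* proto.h - a prototype, plus the argument-assertion macro */
PERL_CALLCONV SV*   Perl_newSVpvn_rot13(pTHX_ const char *const s, const STRLEN len);
#define PERL_ARGS_ASSERT_NEWSVPVN_ROT13 assert(s)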

To actually implement this function we should pick a file to put it in. Since it's creating a new SV, the file sv.c seems reasonable. For neatness we'll put it right next to the other newSVpv* functions, in the same order as the list in embed.fnc:

leo@shy:~/src/bleadperl/perl [git]
$ nvim sv.c

leo@shy:~/src/bleadperl/perl [git]
$ git diff sv.c
diff --git a/sv.c b/sv.c
index e54d0a078f..156e64e879 100644
--- a/sv.c
+++ b/sv.c
@@ -9397,6 +9397,43 @@ Perl_newSVpvn(pTHX_ const char *const buffer, const STRLEN len)
     return sv;
 }
 
+/*
+=for apidoc newSVpvn_rot13
+
+Creates a new SV and copies a string into it by transforming letters by the
+ROT13 algorithm, and copying other bytes literally. The string may contain
+C<NUL> characters and other binary data. The reference count for the new SV
+is set to 1.
+
+=cut
+*/
+
+SV *
+Perl_newSVpvn_rot13(pTHX_ const char *const s, const STRLEN len)
+{
+    char *dp;
+    const char *sp = s, *send = s + len;
+    SV *sv = newSV(len);
+
+    dp = SvPVX(sv);
+    while(sp < send) {
+        char c = *sp;
+        if(isLOWER(c))
+            *dp = 'a' + (c - 'a' + 13) % 26;
+        else if(isUPPER(c))
+            *dp = 'A' + (c - 'A' + 13) % 26;
+        else
+            *dp = c;
+
+        sp++; dp++;
+    }
+
+    *dp = '\0';
+    SvPOK_on(sv);
+    SvCUR_set(sv, len);
+    return sv;
+}
+
 /*
 =for apidoc newSVhek

I don't want to spend a large amount of time or space in this post explaining the whole function, but as a brief summary:

  1. newSV() creates a new SV with a string buffer big enough to store the content (it internally adds 1 more to accommodate the terminating NUL)
  2. The pointers sp and dp are initialised to point into the source and destination string buffers
  3. Characters are copied one at a time; performing the ROT13 algorithm on lower or uppercase letters and passing anything else transparently
  4. The terminating NUL is appended
  5. The current string size and stringiness flag are set on the new SV, which is then returned
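
As a quick hypothetical example of calling it from C code:

/* "Uryyb" is the ROT13 form of "Hello" */
SV *greeting = newSVpvn_rot13("Hello", 5);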

If we run the porting tests again now, we'll find one gets upset:

leo@shy:~/src/bleadperl/perl [git]
$ make test_porting
...
porting/args_assert.t ..... 1/? # Failed test 2 - PERL_ARGS_ASSERT_NEWSVPVN_ROT13 is 
declared but not used at porting/args_assert.t line 64

This test is unhappy because it didn't find any code that actually calls the argument-asserting macro which the regeneration script added to proto.h. This is the macro that asserts the assumptions declared about the function's arguments - here, that the pointer marked NN is never NULL. We can fix that by remembering to use it in the function's definition:

leo@shy:~/src/bleadperl/perl [git]
$ nvim sv.c

leo@shy:~/src/bleadperl/perl [git]
$ git diff sv.c
diff --git a/sv.c b/sv.c
index e54d0a078f..d63c8a7bbb 100644
--- a/sv.c
+++ b/sv.c
...
+SV *
+Perl_newSVpvn_rot13(pTHX_ const char *const s, const STRLEN len)
+{
+    char *dp;
+    const char *sp = s, *send = s + len;
+    SV *sv;
+
+    PERL_ARGS_ASSERT_NEWSVPVN_ROT13;
+
+    sv = newSV(len);
+
+    dp = SvPVX(sv);
...

leo@shy:~/src/bleadperl/perl [git]
$ make test_porting
...
Result: PASS

As core functions go this one is actually pretty terrible. It presumes ASCII (and doesn't work properly on EBCDIC platforms), and requires careful handling in the caller to set the UTF8 flag if required. But overall it's at least good enough for demonstration purposes for our feature. In the next part we'll hook this function up with the opcode implementation and finally see our new feature in action.


2021/02/15

Writing a Perl Core Feature - part 6: Parser


In the previous part I introduced the concepts of the lexer and the parser, and the way they combine to form the part of the compiler which translates the incoming program source code into the in-memory optree where it can be executed. We took a look at some lexer changes, and the way the isa operator was able to work with those alone, without needing a corresponding change in the parser; but we also noted that most non-trivial syntax additions will require concurrent changes to both the lexer and the parser.

In particular, although it is the lexer that creates tokens and emits them to the parser, it is the parser which maintains the list of token types it expects. It is there that new token types have to be added.

The isa operator did not need to make any changes in the parser, so for today's article we'll look instead at the recently-added try/catch syntax, which did. That was first added in this commit, though subsequent modifications have been made to it. Go take a look now - perhaps you will find parts of it recognisable, similar to the changes we've already seen with isa and made for our new banana feature we have been building up.

Similar to the situation with features, warnings, and opcodes, the parser is maintained primarily by changes to one source file which is then run through a regeneration script to update several other files that are generated from it. The source of truth in this case is the file perly.y, and the regeneration script for it is regen_perly.pl (neither of which lives in the regen directory, for reasons lost to the mists of time).

The part of the try/catch commit which updated the parser generation file had two parts to it: (github.com/Perl/perl5).

--- a/perly.y
+++ b/perly.y
@@ -69,6 +69,7 @@
 %token <ival> FORMAT SUB SIGSUB ANONSUB ANON_SIGSUB PACKAGE USE
 %token <ival> WHILE UNTIL IF UNLESS ELSE ELSIF CONTINUE FOR
 %token <ival> GIVEN WHEN DEFAULT
+%token <ival> TRY CATCH
 %token <ival> LOOPEX DOTDOT YADAYADA
 %token <ival> FUNC0 FUNC1 FUNC UNIOP LSTOP
 %token <ival> MULOP ADDOP
@@ -459,6 +460,31 @@ barestmt:  PLUGSTMT
                                  newFOROP(0, NULL, $mexpr, $mblock, $cont));
                          parser->copline = (line_t)$FOR;
                        }
+       |       TRY mblock[try] CATCH PERLY_PAREN_OPEN 
+                       { parser->in_my = 1; }
+               remember scalar 
+                       { parser->in_my = 0; intro_my(); }
+               PERLY_PAREN_CLOSE mblock[catch]
+                       {
...
+                       }
        |       block cont
                        {
                          /* a block is a loop that happens once */

Of these two parts, the first is the bit that defines two new token types. These are types we can use in the lexer - recall from the previous part we saw the lexer emit these tokens as PREBLOCK(TRY) and PREBLOCK(CATCH).

The second part of this change gives the actual parsing rules which the parser uses to recognise the new syntax. It appears in the form of a new alternative in the set of possible rules that the parser may use to create a barestmt (each alternative is separated by | characters). The rule for recognising this one is made from a mix of basic tokens (those in capitals) and other grammar rules (those in lower case). The four basic tokens here are the keyword try, an open and close parenthesis pair (represented by the tokens PERLY_PAREN_OPEN and PERLY_PAREN_CLOSE) and the keyword catch.

In effect, we can imagine the rule expressed instead using literal strings:

barestmt =
    ...
    | "try" mblock "catch" "(" scalar ")" mblock

The other grammar rules that are referred to by this one define the basic shape of a block of code (the one called mblock), and a single scalar variable (the one called scalar). The other parts that I omitted in this simplified version (remember and the two action blocks relating to parser->in_my) are involved with ensuring that the catch variable part of the syntax is recognised as creating a new variable. It pretends that there had been a my keyword just before the variable name, so the name introduces a new variable.

Don't worry too much about the contents of the main action block for this try/catch syntax rule. That's all specific to how to build up the optree for this particular syntax, and in any case was changed in a later commit to move most of it out to a helper function. We'll come back in a moment to see what we can put there for our new syntax.

Let's now begin adding the tokenizing and parsing rules for our new banana feature. Recall from part 5 we decided we'd add two new token types to represent the two basic keywords. We can do that by adding them to the collection of tokens at the top of the perly.y file and running the regeneration script:

leo@shy:~/src/bleadperl/perl [git]
$ nvim perly.y 

leo@shy:~/src/bleadperl/perl [git]
$ git diff perly.y
diff --git a/perly.y b/perly.y
index 184fb0c158..7bbb64f202 100644
--- a/perly.y
+++ b/perly.y
@@ -77,6 +77,7 @@
 %token <ival> LOCAL MY REQUIRE
 %token <ival> COLONATTR FORMLBRACK FORMRBRACK
 %token <ival> SUBLEXSTART SUBLEXEND
+%token <ival> BAN ANA
 
 %type <ival> grammar remember mremember
 %type <ival>  startsub startanonsub startformsub

leo@shy:~/src/bleadperl/perl [git]
$ perl regen_perly.pl 
Changed: perly.act perly.tab perly.h

At this point, if you want, you could take a look at the change the script introduced in perly.h - it just adds the two new token types to the main enum yytokentype, where the tokenizer and the parser can use them. Don't worry about the other two files (perly.act and perly.tab) - they are long tables of automatically generated output, mostly numbers which help the parser maintain its internal state. The changes there won't be particularly meaningful to look at.

As these new token types now exist in perly.h we can use them to update toke.c to recognise them:

leo@shy:~/src/bleadperl/perl [git]
$ nvim toke.c 

leo@shy:~/src/bleadperl/perl [git]
$ git diff toke.c
diff --git a/toke.c b/toke.c
index 628a79fb43..9f86e110ce 100644
--- a/toke.c
+++ b/toke.c
@@ -7686,6 +7686,11 @@ yyl_word_or_keyword(pTHX_ char *s, STRLEN len, I32 key, I32 orig_keyword, struct
     case KEY_accept:
         LOP(OP_ACCEPT,XTERM);
 
+    case KEY_ana:
+        Perl_ck_warner_d(aTHX_
+            packWARN(WARN_EXPERIMENTAL__BANANA), "banana is experimental");
+        TOKEN(ANA);
+
     case KEY_and:
         if (!PL_lex_allbrackets && PL_lex_fakeeof >= LEX_FAKEEOF_LOWLOGIC)
             return REPORT(0);
@@ -7694,6 +7699,11 @@ yyl_word_or_keyword(pTHX_ char *s, STRLEN len, I32 key, I32 orig_keyword, struct
     case KEY_atan2:
         LOP(OP_ATAN2,XTERM);
 
+    case KEY_ban:
+        Perl_ck_warner_d(aTHX_
+            packWARN(WARN_EXPERIMENTAL__BANANA), "banana is experimental");
+        TOKEN(BAN);
+
     case KEY_bind:
         LOP(OP_BIND,XTERM);

Now we can rebuild perl and test some examples:

leo@shy:~/src/bleadperl/perl [git]
$ make -j4 perl

leo@shy:~/src/bleadperl/perl [git]
$ ./perl -Ilib -E 'use feature "banana"; say ban "a string here" ana;'
banana is experimental at -e line 1.
banana is experimental at -e line 1.
syntax error at -e line 1, near "say ban"
Execution of -e aborted due to compilation errors.

We get our expected warnings about the experimental syntax, and then a syntax error. This is because, while the lexer recognises our keywords, we haven't yet written a grammar rule to tell the parser what to do with them. But we can at least tell that the lexer recognised the keywords, because if we test without enabling the feature we get a totally different error:

leo@shy:~/src/bleadperl/perl [git]
$ ./perl -Ilib -E 'say ban "a string here" ana;'
Bareword found where operator expected at -e line 1, near ""a string here" ana"
        (Missing operator before ana?)
syntax error at -e line 1, near ""a string here" ana"
Execution of -e aborted due to compilation errors.

Let's now add a grammar rule to let the parser recognise this syntax:

leo@shy:~/src/bleadperl/perl [git]
$ nvim perly.y 

leo@shy:~/src/bleadperl/perl [git]
$ git diff perly.y
...
                    SUBLEXSTART listexpr optrepl SUBLEXEND
                        { $$ = pmruntime($PMFUNC, $listexpr, $optrepl, 1, $<ival>2); }
+       |       BAN expr ANA
+                       { $$ = newUNOP(OP_BANANA, 0, $expr); }
        |       BAREWORD
        |       listop
...

leo@shy:~/src/bleadperl/perl [git]
$ make -j4 perl

With this new definition our new syntax:

  • is recognised as a basic term expression, meaning it can stand in the same parts of syntax as other expressions such as constants or variables
  • requires an expr expression between the ban and ana keywords, meaning it will accept any sort of complex expression such as a string concatenation operator or function call

After the grammar rule which tells the parser how to recognise the new syntax, we've added a block of code telling it how to implement it. This is translated into real C code that forms part of the parser, so we can invoke any bits of perl interpreter internals from here. When it gets translated, a few special variables are replaced in the code - these are the ones prefixed with $ symbols. The $$ variable is where the parser expects to find the output of this particular grammar rule; it's where we put the optree we construct to represent it. As input we can use the other variable, named after the sub-rule used to parse it - $expr. That contains the output of parsing that part of the syntax - again, an optree.

In this action block it is now a simple matter of generating an optree for the OP_BANANA opcode we added in part 4. Recall that was an op of type UNOP, so we use the newUNOP() function to do this, taking as its child subtree the expression between the two keywords which we got in $expr. We just put that result into the $$ result variable, and we're done.
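
If you're curious what that becomes, regen_perly.pl runs the grammar through bison and the action ends up in perly.act as ordinary C using the parser's value stack - roughly like this (the surrounding case label and the exact stack offsets are generated for us, so treat this purely as an illustrative sketch):

/* $$ becomes the rule's slot on the value stack; $expr is the middle of
 * the three symbols BAN expr ANA, one below the top of that stack */
(yyval.opval) = newUNOP(OP_BANANA, 0, (yyvsp[-1].opval));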

Now we can try using it:

leo@shy:~/src/bleadperl/perl [git]
$ ./perl -Ilib -E 'use feature "banana"; say ban "a string here" ana;'
banana is experimental at -e line 1.
banana is experimental at -e line 1.
panic: we have no bananas at -e line 1.

Hurrah! We get the panic message we added as a placeholder when we created the Perl_pp_banana function back in part 4. The pieces are now starting to come together - in the next part we'll start implementing the actual behaviour behind this syntax.

Let's not forget to add the new "experimental" warning to pod/perldiag.pod in order to keep the porting test happy:

leo@shy:~/src/bleadperl/perl [git]
$ nvim pod/perldiag.pod 

$ git diff pod/perldiag.pod
diff --git a/pod/perldiag.pod b/pod/perldiag.pod
index 98d159dc21..66b0a4aa40 100644
--- a/pod/perldiag.pod
+++ b/pod/perldiag.pod
@@ -519,6 +519,11 @@ wasn't a symbol table entry.
 (P) An internal request asked to add a scalar entry to something that
 wasn't a symbol table entry.
 
+=item banana is experimental
+
+(S experimental::banana) This warning is emitted if you use the banana
+syntax (C<ban> ... C<ana>). This syntax is currently experimental.
+
 =item Bareword found in conditional
 

For now there's one last thing we can look at. Even though we don't have an implementation behind the syntax, we can at least compile it into an optree. We can inspect the generated optree by using the -MO=Concise compiler backend:

leo@shy:~/src/bleadperl/perl [git]
$ ./perl -Ilib -MO=Concise -E 'use feature "banana"; say ban "a string here" ana;'
banana is experimental at -e line 1.
banana is experimental at -e line 1.
7  <@> leave[1 ref] vKP/REFC ->(end)
1     <0> enter v ->2
2     <;> nextstate(main 3 -e:1) v:%,us,{,fea=15 ->3
6     <@> say vK ->7
3        <0> pushmark s ->4
5        <1> banana sK/1 ->6
4           <$> const(PV "a string here") s ->5
-e syntax OK

I won't go into the full details here - for that you can read the documentation at B::Concise. For now I'll just remark that we can see the banana op here, as a UNOP (the <1> marker before it), sitting in the optree as a child node of say, with the string constant as its own child op. When working on optree construction, the B::Concise module is a handy debugging tool you can use to inspect the generated optree and ensure it has the shape you expected.
