Table of Contents
  1. Jonathan Pryor's web log
    1. HackWeek V
    2. mdoc Repository Format History
    3. Assembly Versioning with mdoc
    4. Caching mdoc's ASP.NET-generated HTML
    5. Configuring the ASP.NET front-end for mdoc
    6. Assembling Documentation with mdoc
    7. Exporting mdoc Repositories to Microsoft XML Documentation
    8. Customizing mdoc's Static HTML Output
    9. mdoc XML Schema
    10. Writing Documentation for mdoc
    11. Using mdoc
    12. What is mdoc?
      1. Why the mdoc repository?
      2. Why use mdoc?
    13. Re-Introducing mdoc
    14. Linq to SQL on Mono 2.6: NerdDinner on Mono
    15. Mono.Data.Sqlite & System.Data in MonoTouch 1.2 [Preview]
      1. What Does This Mean?
      2. Example?
      3. What's Missing?
      4. Why Provide Mono.Data.Sqlite?
      5. What About Data Binding?
      6. Conclusion
    16. Linq to SQL on Mono Update: NerdDinner on Mono
    17. Mono 2.4 and mdoc-update
    18. DbLinq and Mono
      1. DbLinq On Mono
      2. DbLinq In Mono
    19. Extension Method Documentation
    20. How To Defend Against Software Patent FUD
    21. Unix Signal Handling In C#
    22. Mono and Mixed Mode Assembly Support
    23. So you want to parse a command line...
    24. Re-Introducing monodocer
      1. Monodocer
      2. monodocer -importecmadoc
      3. Optimizing monodocer -importecmadoc
      4. Conclusion
    25. POSIX Says The Darndest Things
    26. Mono.Fuse 0.4.1
      1. Mac OS X HOWTO
      2. Known Issues
      3. Download
      4. GIT Repository
    27. When Comparisons Fail
    28. Novell, Microsoft, & Patents
    29. Mono.Fuse 0.4.0
      1. API Changes from the previous release:
      2. Download
      3. GIT Repository
    30. Naming, Mono.Fuse Documentation
    31. Mono.Fuse 0.3.0
      1. API Changes from the previous release:
      2. Download
      3. GIT Repository
    32. Miguel's ReflectionFS
    33. Mono.Fuse, Take 2.1!
    34. Mono.Fuse, Take 2!
    35. Announcing Mono.Fuse
      1. Why?
      2. What about SULF?
      3. Implementation
        1. mono
        2. mcs
      4. HOWTO
      5. Questions
    36. Performance Comparison: IList<T> Between Arrays and List<T>
    37. System.Diagnostics Tracing Support
    38. Mono.Unix Reorganization
    39. Major Change to Nullable Types
    40. Frogger under Mono
    41. Mono.Unix Documentation Stubs

Jonathan Pryor's web log

HackWeek V

Last week was HackWeek V, during which I had small goals, yet most of my time was eaten by unexpected "roadblocks."

The week started with my mis-remembering OptionSet behavior. I had thought that there was a bug with passing options containing DOS paths, as I thought the path would be overly split:

string path = null;
var o = new OptionSet () {
	{ "path=", v => path = v },
};
o.Parse (new[]{@"-path=C:\path"});

Fortunately, my memory was wrong: this works as expected. Yay.

What fails is if the option supports multiple values:

string key = null, value = null;
var o = new OptionSet () {
	{ "D=", (k, v) => {key = k; value = v;} },
};
o.Parse (new[]{@"-DFOO=C:\path"});

The above fails with an OptionException: because the DOS path contains a ':' (a valid key/value separator), the path is split, so OptionSet attempts to send 3 arguments to an option expecting 2 arguments. This isn't allowed.

The patch to fix the above is trivial (most of that patch is for tests). However, the fix didn't work at first.

Enter roadblock #1: String.Split() can return too many substrings. Oops.

So I fixed it. That only killed a day...

Next up, I had been sent an email showing that OptionSet had some bugs when removing by index. I couldn't let that happen...and being in a TDD mood, I first wrote some unit tests to describe what the IList<T> semantics should be. Being in an over-engineering mood, I wrote a set of "contract" tests for IList<T> in Cadenza, fixed some Cadenza bugs so that Cadenza would pass the new ListContract, then merged ListContract with the OptionSet tests.

Then I hit roadblock #2 when KeyedCollection<TKey, TItem> wouldn't pass my ListContract tests, as it wasn't exception safe. Not willing to give up on ListContract, I fixed KeyedCollection so it would now pass my ListContract tests, improving compatibility with .NET in the process, which allowed me to finally fix the OptionSet bugs.
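To illustrate the idea, a "contract" test for IList&lt;T&gt; just encodes behaviors that every implementation must share, and exception safety is one of them. A minimal sketch (the names here are hypothetical, not Cadenza's actual ListContract API):

```csharp
using System;
using System.Collections.Generic;

static class ListContractChecks {
    // One behavior every IList<T> should honor: an Insert that throws
    // must leave the list unmodified (exception safety).
    public static void CheckInsertOutOfRange (IList<int> list)
    {
        list.Add (1);
        int count = list.Count;
        try {
            list.Insert (-1, 42);
            throw new InvalidOperationException ("expected ArgumentOutOfRangeException");
        }
        catch (ArgumentOutOfRangeException) {
            // expected: -1 is not a valid index
        }
        if (list.Count != count)
            throw new InvalidOperationException ("failing Insert modified the list");
    }
}
```

The point of sharing such a contract is that the same checks can run against List&lt;T&gt;, a KeyedCollection&lt;TKey, TItem&gt; subclass, and OptionSet alike.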

I was then able to fix a mdoc export-html bug in which index files wouldn't always be updated, before starting to investigate mdoc assemble wanting gobs of memory.

While pondering how to figure out why mdoc assemble wanted 400MB of memory, I asked the folks on ##csharp on freenode if there were any Mono bugs preventing their SpikeLite bot from working under Mono. They kindly directed me toward a bug in which AppDomain.ProcessExit was being fired at the wrong time. This proved easier than I feared (I feared it would be beyond me).

Which left me with pondering a memory "leak." It obviously couldn't be a leak with a GC and no unmanaged memory to speak of, but what was causing so much memory to be used? Thus proceeded lots of Console.WriteLine(GC.GetTotalMemory(false)) calls and reading the output to see where the memory use was jumping (as, alas, I found Mono's memory profiler to be less than useful for me, and mono's profiler was far slower than a normal run). This eventually directed me to the problem:

I needed, at most, two XmlNode values from an XmlDocument. An XmlDocument loaded from a file that could be very small or large-ish (0.5MB). Thousands of such files. At once.

That's when it dawned on me that storing XmlNodes in a Dictionary loaded from thousands of XmlDocuments might not be such a good idea, as each XmlNode retains a reference to the XmlDocument it came from, so I was basically copying the entire documentation set into memory, when I only needed a fraction of it. Doh!

The fix was straightforward: keep a temporary XmlDocument around and call XmlDocument.ImportNode to preserve just the data I needed.
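The pattern looks roughly like this (a sketch, not mdoc's actual code):

```csharp
using System;
using System.Xml;

class ImportNodeDemo {
    public static void Main ()
    {
        // Stand-in for one of thousands of (possibly large) source documents:
        var source = new XmlDocument ();
        source.LoadXml ("<Type><Docs><summary>Some docs.</summary></Docs></Type>");

        // One long-lived document owns every cached node, so the source
        // documents can be garbage collected after loading:
        var cache = new XmlDocument ();
        XmlNode summary = cache.ImportNode (
                source.SelectSingleNode ("/Type/Docs/summary"), true);

        // The imported copy belongs to `cache`, not `source`:
        Console.WriteLine (summary.OwnerDocument == cache); // True
        Console.WriteLine (summary.InnerText);              // Some docs.
    }
}
```

Storing the imported copy (instead of the original XmlNode) no longer pins the source XmlDocument, and its whole tree, in memory.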

Memory use plummeted to less than one tenth what was previously required.

Along the way I ran across and reported an xbuild bug (since fixed), and filed a regression in gmcs which prevented Cadenza from building.

Overall, a productive week, but not at all what I had originally intended.

Posted on 15 Jun 2010 | Path: /development/mono/ | Permalink

mdoc Repository Format History

Time to wrap up this overly long series on mdoc. We covered:

To close out this series, where did the mdoc repository format come from? It mostly came from Microsoft, actually.

Taking a step back, "in the beginning," as it were, the Mono project saw the need for documentation in January 2002. I wasn't involved then, but perusing the archives we can see that csc /doc output was discarded early because it wouldn't support translation into multiple languages. NDoc was similarly discarded because it relied on csc /doc documentation. I'm sure a related problem at the time was that Mono's C# compiler didn't support the /doc compiler option (and wouldn't begin to support /doc until April 2004), so there would be no mechanism to extract any inline documentation anyway.

By April 2003 ECMA standardization of the Common Language Infrastructure was apparently in full force, and the standardization effort included actual class library documentation. The ECMA documentation is available within ECMA-335. The ECMA-335 documentation also included a DTD for the documentation contained therein, and it was a superset of the normal C# XML documentation. The additional XML elements provided what XML documentation lacked: information available from the assembly, such as actual parameter types, return types, base class types, etc. There was one problem with ECMA-335 XML, though: it was gigantic, throwing everything into a single 7MB+ XML file.

To make this format more version-control friendly (can you imagine maintaining and viewing diffs on a 7+MB XML file?), Mono "extended" the ECMA-335 documentation format by splitting it into one file per type. This forms the fundamental basis of the mdoc repository format (and is why I say that the repository format came from Microsoft, as Microsoft provided the documentation XML and DTD to ECMA). This is also why tools such as mdoc assemble refer to the format as ecma. The remainder of the Mono extensions were added in order to fix various documentation bugs (e.g. to distinguish between ref vs. out parameters, to better support generics), etc.

In closing this series, I would like to thank everyone who has ever worked on Monodoc and the surrounding tools and infrastructure. It wouldn't be anywhere near as useful without them.

Posted on 20 Jan 2010 | Path: /development/mono/mdoc/ | Permalink

Assembly Versioning with mdoc

Previously, we mentioned as an aside that the Type.xml files within an mdoc repository contained //AssemblyVersion elements. Today we will discuss what they're for.

The //AssemblyVersion element records exactly one thing: which assembly versions a type or member was found in. (The assembly version is specified via the AssemblyVersionAttribute attribute.) With a normal assembly versioning policy, this allows monodoc to show two things: which version added the type/member, and (by inference) which version(s) removed the member.

For example, consider the NetworkStream.Close method. This method was present in .NET 1.0, where it overrode Stream.Close. However, in .NET 2.0 the override was removed entirely.

The //AssemblyVersion attribute allows the mdoc repository to track such versioning changes; for example, consider the mdoc-generated NetworkStream.xml file. The //Member[@MemberName='Close']/AssemblyInfo/AssemblyVersion elements contain only an entry for 1.0.5000.0 (corresponding to .NET 1.1) on line 536. Compare to the //Member[@MemberName='CanWrite']/AssemblyInfo/AssemblyVersion elements (for the NetworkStream.CanWrite property), which have //AssemblyVersion elements for 1.0.5000.0 and 2.0.0.0. From this, we can deduce that NetworkStream.Close was present in .NET 1.1, but was removed in .NET 2.0.
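In the XML itself, this looks roughly like the following fragment (abbreviated and illustrative, not copied verbatim from NetworkStream.xml):

```xml
<Member MemberName="CanWrite">
  <AssemblyInfo>
    <AssemblyVersion>1.0.5000.0</AssemblyVersion>
    <AssemblyVersion>2.0.0.0</AssemblyVersion>
  </AssemblyInfo>
  <!-- ... -->
</Member>
<Member MemberName="Close">
  <AssemblyInfo>
    <AssemblyVersion>1.0.5000.0</AssemblyVersion>
  </AssemblyInfo>
  <!-- ... -->
</Member>
```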

When viewing type and member documentation, monodoc and the ASP.NET front end will show the assembly versions that have the member:

NetworkStream.Close -- notice only 1.0.5000.0 is a listed assembly version.

There are two limitations with the version tracking:

  1. It only tracks types and members. For example, attributes, base classes, and interfaces may be added or removed across versions; these are not currently tracked.
  2. It uses the assembly version to fill the <AssemblyVersion> element.

The second point may sound like a feature (isn't it the point?), but it has one downfall: auto-generated assembly versions. You can specify an auto-generated assembly version by using the * for some components in the AssemblyVersionAttribute constructor:

[assembly: AssemblyVersion("1.0.*")]

If you do this, every time you rebuild the assembly the compiler will dutifully generate a different assembly version. For example, the first time you might get an assembly version of 1.0.3666.19295, while on the second recompilation the compiler will generate 1.0.3666.19375. Since mdoc assigns no meaning to the version numbers, it will create //AssemblyVersion elements for each distinct version found.

The "advantage" is that you can know on which build a member was added. (If you actually care...)

The disadvantage is a major bloating of the mdoc repository, as you add at least 52*(1+M) bytes to each file in the mdoc repository for each unique assembly version (where M is the number of members within the file, as each member is separately tracked). It will also make viewing the documentation distracting; imagine seeing 10 different version numbers for a member, which all differ in the build number. That much noise would make the feature ~useless.

As such, if you're going to use mdoc, I highly suggest not using auto-generated assembly version numbers.

Next time, we'll wrap up this series with a history of the mdoc repository format.

Posted on 19 Jan 2010 | Path: /development/mono/mdoc/ | Permalink

Caching mdoc's ASP.NET-generated HTML

Last time we discussed configuring the ASP.NET front-end to display monodoc documentation. The display of extension methods within monodoc and the ASP.NET front-end is fully dynamic. This has its pros and cons.

On the pro side, if/when you install additional assembled documentation sources, those sources will be searched for extension methods and they will be shown on all matching types. This is very cool.

On the con side, searching for the extension methods and converting them into HTML takes time -- there is a noticeable delay when viewing all members of a type if there are lots of extension methods. On heavily loaded servers, this may be detrimental to overall performance.

If you're running the ASP.NET front-end, you're not regularly adding documentation, and you have Mono 2.6, you can use the mdoc export-html-webdoc command to pre-render the HTML files and cache the results. This will speed up future rendering.

For example, consider the url http://localhost:8080/index.aspx?link=T:System.Collections.Generic.List`1/* (which shows all of the List<T> members). This is a frameset, and the important frame here is http://localhost:8080/monodoc.ashx?link=T:System.Collections.Generic.List`1/* which contains the member listing (which includes extension methods). On my machine, it takes ~2.0s to download this page:

$ time curl -s \
	'http://localhost:8080/monodoc.ashx?link=T:System.Collections.Generic.List`1/*' \
	> /dev/null

real	0m2.021s
user	0m0.003s
sys	0m0.002s

In a world where links need to take less than 0.1 seconds to be responsive, this is...pretty bad.

After running mdoc export-html-webdoc (which contains the List<T> docs):

$ time curl -s \
	'http://localhost:8080/monodoc.ashx?link=T:System.Collections.Generic.List`1/*' \
	> /dev/null

real	0m0.051s
user	0m0.003s
sys	0m0.006s

That's nearly 40x faster, and within the 0.1s guideline.

Cache Generation: to generate the cache files, run mdoc export-html-webdoc ASSEMBLED-FILES. ASSEMBLED-FILES consists of the .tree or .zip files which are generated by mdoc assemble and have been installed into $prefix/lib/monodoc/sources:

$ mdoc export-html-webdoc $prefix/lib/monodoc/sources/

(Where $prefix is your Mono installation prefix, making the full path e.g. /usr/lib/monodoc/sources/.)

This will create a directory tree within $prefix/lib/monodoc/sources/cache/Demo. Restarting the ASP.NET front-end will allow it to use the cache.

If you don't want to generate the cache in another directory, use the -o=PREFIX option. This is useful if you're updating an existing cache on a live server and you don't want to overwrite/replace the existing cache (it's a live server!) -- generate the cache elsewhere, then move the files when the server is offline.

If you have lots of time on your hands, you could process all assembled documentation with:

$ mdoc export-html-webdoc $prefix/lib/monodoc/sources/*.zip

Limitations: It should be noted that this is full of limitations, so you should only use it if performance is really important. Limitations include:

Next time, we'll cover mdoc's support for assembly versioning.

Posted on 15 Jan 2010 | Path: /development/mono/mdoc/ | Permalink

Configuring the ASP.NET front-end for mdoc

Last time, we assembled our documentation and installed it for use with monodoc. This is a prerequisite for ASP.NET support (as they both use the same system-wide documentation directory).

Once the documentation is installed (assuming a Linux distro or OSX with the relevant command-line tools installed), you can trivially host a web server which will display the documentation:

$ svn co
# output omitted...
$ cd webdoc
$ xsp2

You will need to change the svn co command to use the same version of Mono that is present on your system. For example, if you have Mono 2.6 installed, change the mono-2-4 to mono-2-6.

Once xsp2 is running, you can point your web browser to http://localhost:8080 to view documentation. This will show the same documentation as monodoc did last time:

System.Array extension methods -- notice With() is listed

For "real" use, setting up using Apache with mod_mono may be preferred (or any of the other options listed at Mono's ASP.NET support page). Configuring mod_mono or anything other than xsp2 is beyond my meager abilities.

Next time, we'll discuss improving the ASP.NET front-end's page rendering performance.

Posted on 14 Jan 2010 | Path: /development/mono/mdoc/ | Permalink

Assembling Documentation with mdoc

We previously discussed exporting the mdoc repository into static HTML files using mdoc export-html and into a Microsoft XML Documentation file with mdoc export-msxdoc. Today, we'll discuss exporting documentation with mdoc assemble.

mdoc assemble is used to assemble documentation for use with the monodoc Documentation browser and the ASP.NET front-end. This involves the following steps:

  1. Running mdoc assemble.
  2. Writing a .source file.
  3. Installing the files.

Unfortunately we're taking a diversion from the Windows world, as the monodoc browser and the ASP.NET front-end won't run under Windows (due to limitations in the monodoc infrastructure). I will attempt to fix these limitations in the future.

Running mdoc assemble: mdoc assemble has three arguments of interest:

For our current documentation, we would run:

$ mdoc assemble -o Demo Documentation/

This will create the files Demo.tree and Demo.zip in the current working directory.

The .source file is used to tell the documentation browser where in the tree the documentation should be inserted. It's an XML file that contains two things: a (set of) /monodoc/node elements describing where in the tree the documentation should be inserted, and /monodoc/source elements which specify the files to use. For example:

<?xml version="1.0"?>
<monodoc>
  <node label="Demo Library" name="Demo-lib" parent="libraries" />
  <source provider="ecma" basefile="Demo" path="Demo-lib"/>
</monodoc>

The /monodoc/node element describes where in the monodoc tree the documentation should be placed. It has three attributes, two of which are required:

The /monodoc/source element describes what file basename to use when looking for the .tree and .zip files. (By convention the .source, .tree, and .zip files share the same basename, but this is not required. The .tree and .zip files must share the same basename, but the .source basename may differ, and will differ if e.g. one .source file pulls in several .tree/.zip pairs.) It has three attributes, all of which are required:

Installing the files. Files need to be installed into $prefix/lib/monodoc/sources. You can obtain this directory with pkg-config(1):

$ cp Demo.source Demo.tree Demo.zip \
    `pkg-config monodoc --variable=sourcesdir`

Now when we run monodoc, we can navigate to the documentation that was just installed:

ObjectCoda.With() documentation inside monodoc.

Additionally, those paying attention on January 10 will have noticed that the With() method we documented is an extension method. Monodoc supports displaying extension methods on the relevant type documentation. In this case, With() is an extension on TSource, which is, for all intents and purposes, System.Object. Thus, if we view the System.Object docs within our local monodoc browser, we will see the With() extension method:

System.Object extension methods -- notice With() is listed.

In fact, we will see With() listed as an extension method on all types (which is arguably a bug, as static types can't have instance methods...).

Furthermore, mdoc export-html will also list extension methods. However, mdoc export-html is far more limited: it will only look for extension methods within the mdoc repositories being processed, and it will only list those methods as extension methods on types within the mdoc repository. Consequently, mdoc export-html will not list e.g. IEnumerable<T> extension methods on types that implement IEnumerable<T>. (It simply lacks the information to do so.)

Examples of mdoc export-html listings of extension methods can be found in the mdoc unit tests and the Cadenza.Collections.CachedSequence<T> docs (which lists a million extension methods because Cadenza.Collections.EnumerableCoda contains a million extension methods on IEnumerable<T>).

Next time, we'll discuss setting up the ASP.NET front end under Linux.

Posted on 13 Jan 2010 | Path: /development/mono/mdoc/ | Permalink

Exporting mdoc Repositories to Microsoft XML Documentation

Previously, we discussed how to write documentation and get it into the documentation repository. We also discussed exporting the documentation into static HTML files using mdoc export-html. Today, we'll discuss mdoc export-msxdoc.

mdoc export-msxdoc is used to export the documentation within the mdoc repository into a .xml file that conforms to the same schema as csc /doc. This allows you, if you so choose, to go entirely to externally managed documentation (instead of inline XML) and still be able to produce your Assembly.xml file so that Visual Studio/etc. can provide code completion against your assembly.

There are two ways to invoke it:

$ mdoc export-msxdoc Documentation/en
$ mdoc export-msxdoc -o Demo.xml Documentation/en

The primary difference between these is what files are generated. Within each Type.xml file of the mdoc repository (e.g. ObjectCoda.xml) is a /Type/AssemblyInfo/AssemblyName element.

The first command (lacking -o Demo.xml) will generate a set of .xml files, where the filenames are based on the values of the /Type/AssemblyInfo/AssemblyName element values, in this case Demo.xml. Additionally, a NamespaceSummaries.xml file is generated, containing documentation for any namespaces that were documented (which come from the ns-*.xml files, e.g. ns-Cadenza.xml).

The second command (which specifies -o Demo.xml) will only generate the specified file (in this case Demo.xml).

For this mdoc repository, there is no actual difference between the commands (as only one assembly was documented within the repository), except for the generation of the NamespaceSummaries.xml file. However, if you place documentation from multiple assemblies into the same mdoc repository, the first command will properly generate .xml files for each assembly, while the latter will generate only a single .xml file containing the documentation from all assemblies.
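The generated Demo.xml uses the same shape as csc /doc output; for our single documented type it would look something like this (element contents abbreviated):

```xml
<doc>
  <assembly>
    <name>Demo</name>
  </assembly>
  <members>
    <member name="T:Cadenza.ObjectCoda">
      <summary>Extension methods on <see cref="T:System.Object" />.</summary>
    </member>
    <member name="M:Cadenza.ObjectCoda.With``2(``0,System.Func{``0,``1})">
      <summary>Supports chaining otherwise temporary values.</summary>
    </member>
  </members>
</doc>
```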

Next time, we'll cover mdoc assemble.

Posted on 12 Jan 2010 | Path: /development/mono/mdoc/ | Permalink

Customizing mdoc's Static HTML Output

Last time, we wrote documentation for our Demo.dll assembly. What if we want to improve the looks of those docs, e.g. to change the colors or add additional navigation links for site consistency purposes?

mdoc export-html uses three mechanisms to control output:

The XSLT needs to consume an XML document that has the following structure:

<Page>
    <CollectionTitle>Collection Title</CollectionTitle>
    <PageTitle>Page Title</PageTitle>
    <Summary>Page Summary</Summary>
    <Signature>Type Declaration</Signature>
    <Remarks>Type Remarks</Remarks>
    <Members>Type Members</Members>
    <Copyright>Documentation Copyright</Copyright>
</Page>

The contents of each of the //Page/* elements contain HTML or plain text nodes. Specifically:

CollectionTitle: Contains the Assembly and Namespace name links.
PageTitle: Contains the type name/description.
Summary: Contains the type <summary/> documentation.
Signature: Contains the type signature, e.g. whether it's a struct or class, implemented interfaces, etc.
Remarks: Contains type-level <remarks/>.
Members: Contains the documentation for all of the members of the type, including a table for all of the members.
Copyright: Contains copyright information taken from the mdoc repository, specifically from index.xml's /Overview/Copyright element.

By providing a custom --template XSLT and/or by providing an additional CSS file, you have some degree of control over the resulting documentation.

I'll be the first to admit that this isn't a whole lot of flexibility; there is no control over what CSS class names are used, nor is there any control over what is generated within the /Page//* elements. What this model does allow is for controlling the basic page layout, e.g. to add a site-wide menu system, allowing documentation to be consistent with the rest of the site.
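For instance, a minimal --template stylesheet that wraps each page in your own chrome might look like this (a sketch; site.css and the menu comment are placeholders for your own site's markup):

```xml
<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="Page">
    <html>
      <head>
        <title><xsl:value-of select="PageTitle" /></title>
        <link rel="stylesheet" href="site.css" />
      </head>
      <body>
        <!-- site-wide menu would go here -->
        <h1><xsl:copy-of select="PageTitle/node()" /></h1>
        <xsl:copy-of select="Summary/node()" />
        <xsl:copy-of select="Signature/node()" />
        <xsl:copy-of select="Remarks/node()" />
        <xsl:copy-of select="Members/node()" />
        <hr />
        <small><xsl:copy-of select="Copyright/node()" /></small>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```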

For example, my site uses custom templates to provide a uniform look-and-feel with the rest of their respective sites for the Mono.Fuse and NDesk.Options documentation.

Next time, we'll cover mdoc export-msxdoc.

Posted on 11 Jan 2010 | Path: /development/mono/mdoc/ | Permalink

mdoc XML Schema

Previously, I mentioned that you could manually edit the XML files within the mdoc repository.

What I neglected to mention is that there are only parts of the XML files that you should edit, and that there is an XML Schema file available for all docs.

The mdoc(5) man page lays out which files within the repository (and which parts of those files) are editable. In summary, all ns-*.xml files and the //Docs nodes of all other .xml files are editable, and they should contain ye normal XML documentation elements (which are also documented within the mdoc(5) man page).

The XML Schema can be found in Mono's SVN, at

Posted on 10 Jan 2010 | Path: /development/mono/mdoc/ | Permalink

Writing Documentation for mdoc

Last time, we created an assembly and used mdoc to generate a documentation repository containing stubs. Stubs have some utility -- you can view the types, members, and parameter types that are currently present -- but they're far from ideal. We want actual documentation.

Unfortunately, mdoc isn't an AI, and can't write documentation for you. It manages documentation; it doesn't create it.

How do we get actual documentation into the repository? There are three ways:

  1. Manually edit the XML files within the repository directory (if following from last time, this would be all .xml files within the Documentation/en directory).
  2. Use monodoc --edit Documentation/en.
  3. We can continue writing XML documentation within our source code.

Manually editing the files should be self-explanatory; it's not exactly ideal, but it works, and is how I write most of my documentation.

When using monodoc --edit Documentation/en, the contents of Documentation/en will be shown in the tree view, sorted by assembly name, e.g. in the Mono Documentation → Demo node. When viewing documentation, there are [Edit] links that, when clicked, will allow editing the node (which directly edits the files within Documentation/en).

However, I can't recommend monodoc as an actual editor. Its usability is terrible, with one major flaw: when editing method overloads, most of the documentation will be the same (or similar enough that you'll want to copy everything anyway), e.g. <summary/>, <param/>, etc. The monodoc editor doesn't allow copying all of this at once, only each element individually. It makes for a very slow experience.

Which brings us to inline XML documentation. mdoc update supports importing XML documentation as produced by csc /doc. So let's edit our source code to add inline documentation:

using System;

namespace Cadenza {
    /// <summary>
    ///  Extension methods on <see cref="T:System.Object" />.
    /// </summary>
    public static class ObjectCoda {
        /// <typeparam name="TSource">The type to operate on.</typeparam>
        /// <typeparam name="TResult">The type to return.</typeparam>
        /// <param name="self">
        ///   A <typeparamref name="TSource" /> containing the value to manipulate.
        ///   This value may be <see langword="null" /> (unlike most other
        ///   extension methods).
        /// </param>
        /// <param name="selector">
        ///   A <see cref="T:System.Func{TSource,TResult}" /> which will be
        ///   invoked with <paramref name="self" /> as a parameter.
        /// </param>
        /// <summary>
        ///   Supports chaining otherwise temporary values.
        /// </summary>
        /// <returns>
        ///   The value of type <typeparamref name="TResult" /> returned by
        ///   <paramref name="selector" />.
        /// </returns>
        /// <remarks>
        ///   <para>
        ///     <c>With</c> is useful for easily using an intermediate value within
        ///     an expression "chain" without requiring an explicit variable
        ///     declaration (which is useful for reducing in-scope variables, as no
        ///     variable is explicitly declared).
        ///   </para>
        ///   <code lang="C#" src="../../example.cs#With" />
        /// </remarks>
        /// <exception cref="T:System.ArgumentNullException">
        ///   <paramref name="selector" /> is <see langword="null" />.
        /// </exception>
        public static TResult With<TSource, TResult>(
                this TSource self, 
                Func<TSource, TResult> selector)
        {
            if (selector == null)
                throw new ArgumentNullException ("selector");
            return selector (self);
        }
    }
}

(As an aside, notice that our file ballooned from 14 lines to 45 lines because of all the documentation. This is why I prefer to keep my documentation external to the source code, as it really bloats the source. Certainly, the IDE can hide comments, but I find that this defeats the purpose of having comments in the first place.)

Compile it into an assembly (use csc if running on Windows), specifying the /doc parameter to extract XML documentation comments:

$ gmcs /t:library /out:Demo.dll /doc:Demo.xml demo.cs

Update our documentation repository, but import Demo.xml:

$ mdoc update -o Documentation/en -i Demo.xml Demo.dll --exceptions=added
Updating: Cadenza.ObjectCoda
Members Added: 0, Members Deleted: 0

(No members were added or deleted as we're only changing the documentation, and didn't add any types or members to the assembly.)

Now when we view ObjectCoda.xml, we can see the documentation that was present in the source code.

However, notice one other change. In the documentation we wrote, we had:

        ///   <code lang="C#" src="../../example.cs#With" />

Yet, within ObjectCoda.xml, we have:

          <code lang="C#" src="../../example.cs#With">Console.WriteLine(
    args.OrderBy(v => v)
    .With(c => c.ElementAt (c.Count()/2)));</code>

What's going on here? What's going on is that mdoc will search for all <code/> elements. If they contain a //code/@src attribute, the specified file is read in and inserted as the //code element's value. The filename specified in the //code/@src attribute is relative to the documentation repository root. A further extension is that, for C# code, if the filename has an "anchor", a #region block of the same name is searched for within the source code.

The ../../example.cs file referenced in the //code/@src value has the contents:

using System;
using System.Linq;
using Cadenza;

class Demo {
    public static void Main (string[] args)
    {
        #region With
        Console.WriteLine(
            args.OrderBy(v => v)
            .With(c => c.ElementAt (c.Count()/2)));
        #endregion
    }
}

This makes it trivial to keep documentation examples actually compiling. For example, I'll have documentation refer to my unit tests, e.g.

<code lang="C#" src="../../Test/Cadenza/ObjectTest.cs#With" />

One final point worth mentioning: you can import documentation as often as you want. The imported documentation will always overwrite whatever is already present within the documentation repository. Consequently, if you want to use mdoc for display purposes but want to continue using inline XML documentation, always import the compiler-generated .xml file.

Now, we can update our HTML documentation:

$ mdoc export-html -o html Documentation/en

The current Demo.dll documentation.

Next time, we'll cover customizing the static HTML output.

Posted on 10 Jan 2010 | Path: /development/mono/mdoc/ | Permalink

Using mdoc

As mentioned last time, mdoc is an assembly-based documentation management system. Thus, before you can use mdoc you need an assembly to document. Let's write some C# source:

using System;

namespace Cadenza {
    public static class ObjectCoda {
        public static TResult With<TSource, TResult>(
                this TSource self, 
                Func<TSource, TResult> selector)
        {
            if (selector == null)
                throw new ArgumentNullException ("selector");
            return selector (self);
        }
    }
}

Compile it into an assembly (use csc if running on Windows):

$ gmcs /t:library /out:Demo.dll demo.cs

Now that we have an assembly, we can create the mdoc repository for the Demo.dll assembly, which will contain documentation stubs for all publicly visible types and members in the assembly:

$ mdoc update -o Documentation/en Demo.dll --exceptions=added
New Type: Cadenza.ObjectCoda
Member Added: public static TResult With<TSource,TResult> (this TSource self, Func<TSource,TResult> selector);
Namespace Directory Created: Cadenza
New Namespace File: Cadenza
Members Added: 1, Members Deleted: 0

mdoc update is the command for synchronizing the documentation repository with the assembly; it can be run multiple times. The -o option specifies where to write the documentation repository. Demo.dll is the assembly to process; any number of assemblies can be specified. The --exceptions argument analyzes the IL to statically determine which exception types can be generated from a member. (It is not without some limitations; see the "--exceptions" documentation section.) The added argument to --exceptions tells mdoc to add <exception/> elements only for types and members that have been added to the repository, not to all types and members in the assembly. This is useful for when you've removed <exception/> documentation and don't want mdoc to re-add them.

We choose Documentation/en as the documentation repository location so that we can easily support localizing the documentation into multiple languages: each directory underneath Documentation would be named after an ISO 639-1 code, e.g. en is for English. This is only a convention, and is not required; any directory name can be used.

Notice that, since mdoc is processing assemblies, it will be able to work with any language that can generate assemblies, such as Visual Basic.NET and F#. It does not require specialized support for each language.

Now we have a documentation repository containing XML files; a particularly relevant file is ObjectCoda.xml, which contains the documentation stubs for our added type. I won't show the output here, but if you view it there are three important things to note:

  1. The XML is full of type information, e.g. the /Type/Members/Member/Parameters/Parameter/@Type attribute value.
  2. The XML contains additional non-documentation information, such as the //AssemblyVersion elements. This will be discussed in a future blog posting.
  3. The //Docs elements are a container for the usual C# XML documentation elements.

Of course, a documentation repository isn't very useful on its own. We want to view it! mdoc provides three ways to view documentation:

  1. mdoc export-html: This command generates a set of static HTML files for all types and members found within the documentation repository.
  2. mdoc assemble: This command "assembles" the documentation repository into a .zip and .tree file for use with the monodoc documentation browser and the ASP.NET front-end (which powers Mono's online documentation).
  3. mdoc export-msxdoc: This generates the "traditional" XML file which contains only member documentation. This is for use with IDEs like Visual Studio, so that the IDE can show summary documentation while editing.

We will cover mdoc assemble and mdoc export-msxdoc in future installments. For now, to generate static HTML:

$ mdoc export-html -o html Documentation/en

The current Demo.dll documentation.

Next time we will cover how to write actual documentation instead of just documentation stubs.

Posted on 09 Jan 2010 | Path: /development/mono/mdoc/ | Permalink

What is mdoc?

mdoc is an assembly-based documentation management system, which recently added support for .NET.

I say "assembly based" because an alternative is source-based, which is what "normal" C# XML documentation, JavaDoc, and perlpod provide. Unlike these source-based systems, in mdoc the documentation for public types and members is not present within the source code. Instead, documentation is stored externally (to the source), in a directory of XML files (hereafter referred to as the mdoc repository).

Furthermore, mdoc provides commands to update documentation from assemblies (mdoc update), generate static HTML (mdoc export-html), assemble documentation for the monodoc browser (mdoc assemble), and export "traditional" XML documentation files (mdoc export-msxdoc).

Why the mdoc repository?

Why have a directory of XML files as the mdoc repository? The mdoc repository comes from the need to satisfy two goals:

  1. The compiler-generated /doc XML contains no type information.
  2. Having types is very useful for HTML output/etc., so the type information must come from somewhere.

Said "somewhere" could be the actual assemblies being documented, but this has other downsides (e.g. it would complicate supporting different versions of the same assembly). mdoc uses the repository to contain both documentation and full type information, so that the source assemblies are only needed to update the repository (and nothing else).

Why use mdoc?

Which provides enough background to get to the point: why use mdoc?

You would primarily want to use mdoc if you want to view your documentation outside of an IDE, e.g. within a web browser or stand-alone documentation browser. Most mdoc functionality is geared toward making documentation viewable (e.g. mdoc export-html and mdoc assemble), and making the documentation that is viewed more useful (such as the full type information provided by mdoc update and the generation of <exception/> elements for documentation provided by mdoc update --exceptions).

Next time, we'll discuss how to use mdoc.

Posted on 08 Jan 2010 | Path: /development/mono/mdoc/ | Permalink

Re-Introducing mdoc

Many moons ago, Jon Skeet announced Noda Time. In it he asked:

How should documentation be created and distributed?

Thus I pondered, "how well does mdoc support Windows users?"

The answer: not very well, particularly in an interop scenario.

So, lots of bugfixing and a false-start later, and I'd like to announce mdoc for .NET. All the power of mdoc, cross-platform.

Note that these changes did not make it into Mono 2.6, and won't be part of a formal Mono release until Mono 2.8. Consequently, if you want to run things under .NET, you should use the above ZIP archive. (You can, of course, install Mono on Windows and then use mdoc via Mono, you just won't be able to run mdoc under .NET.)

The changes made since Mono 2.6 include:

Next time, we'll cover what mdoc is, and why you'd want to use it.

Posted on 07 Jan 2010 | Path: /development/mono/mdoc/ | Permalink

Linq to SQL on Mono 2.6: NerdDinner on Mono

NerdDinner is an ASP.NET MVC sample, licensed under the Ms-PL with sources hosted at CodePlex.

Back on May 14th, I wrote that NerdDinner could be run under Mono using trunk.

Now, I'm pleased to note that the just-released Mono 2.6 includes these changes. Furthermore, thanks to ankit's progress on xbuild, installation and setup is easier than before:

  1. Build (or otherwise obtain) Mono 2.6. The Parallel Mono Environments page may be helpful.
  2. Download the NerdDinner 1.0 sources through a web browser. (curl or wget won't work.)
  3. Extract the NerdDinner sources:
    $ mkdir -p $HOME/tmp
    $ cd $HOME/tmp
    $ unzip "/path/to/NerdDinner"
  4. Build NerdDinner 1.0:
    $ cd "$HOME/tmp/NerdDinner 1.0"
    $ xbuild NerdDinner/NerdDinner.csproj
    (Unfortunately we can't just run xbuild (or build NerdDinner.sln), as this requires access to the MSTest assemblies used by the NerdDinner unit tests, which aren't currently present on Mono.)
  5. Only the web portion runs under Mono, as does the data access layer (System.Data.Linq, more affectionately known as Linq to SQL). The database is still Microsoft SQL Server. Go forth and configure the NerdDinner server (if you don't already have one configured).
  6. Back on the Linux side of things, edit $HOME/tmp/NerdDinner 1.0/NerdDinner/ConnectionStrings.config, and change the NerdDinnerConnectionString connection string to:
    <add name="NerdDinnerConnectionString"
        connectionString="Data Source=gourry\SQLEXPRESS;
        Initial Catalog=NerdDinner;
        User ID=gourry\jonp;
        Integrated Security=true"/>
    You will need to adjust the machine name in the Data Source parameter to contain your actual computer name, and change the User ID and Password to whatever values you chose when configuring SQL Server.
  7. Configure a MembershipProvider for NerdDinner username/password storage.
  8. Run the web app:
    $ cd "$HOME/tmp/NerdDinner 1.0/NerdDinner"
    $ MONO_IOMAP=all xsp2
    The MONO_IOMAP environment variable is needed because some link targets used within NerdDinner require case insensitivity.

Some things worth noting since May. First, openSUSE has released openSUSE 11.2, which is apparently more stringent than 11.1. Consequently, you may need to open the firewall so that port 8080 is accessible. You can do this by:

  1. Opening YaST.
  2. Starting the Firewall applet.
  3. In the Allowed Services area, add the HTTP Server and Mono XSP2 ASP.NET Host Service services.
  4. Click Next, then Finish.

One other oddity I encountered is that a url of http://localhost:8080 isn't permitted; using telnet(1) shows that it attempts to connect to ::1... (i.e. an IPv6 address), and the connection is refused. Instead, I needed to connect to the IPv4 loopback address directly, e.g. http://127.0.0.1:8080.

NerdDinner on Linux!

Posted on 15 Dec 2009 | Path: /development/mono/ | Permalink

Mono.Data.Sqlite & System.Data in MonoTouch 1.2 [Preview]

One of the new features that will be present in MonoTouch 1.2 is the inclusion of the System.Data and Mono.Data.Sqlite assemblies. This is a preview release of System.Data et al.; it may not fully work. Known limitations are listed at the end of this post.

What Does This Mean?

It means that the following assemblies will be included in MonoTouch 1.2, and thus usable by MonoTouch applications:

  • System.Data.dll
  • Mono.Data.Sqlite.dll

Example?

using System;
using System.Data;
using System.IO;
using Mono.Data.Sqlite;

class Demo {
    static void Main (string [] args)
    {
        var connection = GetConnection ();
        using (var cmd = connection.CreateCommand ()) {
            connection.Open ();
            cmd.CommandText = "SELECT * FROM People";
            using (var reader = cmd.ExecuteReader ()) {
                while (reader.Read ()) {
                    Console.Error.Write ("(Row ");
                    Write (reader, 0);
                    for (int i = 1; i < reader.FieldCount; ++i) {
                        Console.Error.Write (" ");
                        Write (reader, i);
                    }
                    Console.Error.WriteLine (")");
                }
            }
            connection.Close ();
        }
    }

    static SqliteConnection GetConnection ()
    {
        var documents = Environment.GetFolderPath (
                Environment.SpecialFolder.Personal);
        string db = Path.Combine (documents, "mydb.db3");
        bool exists = File.Exists (db);
        if (!exists)
            SqliteConnection.CreateFile (db);
        var conn = new SqliteConnection ("Data Source=" + db);
        if (!exists) {
            var commands = new[] {
                "CREATE TABLE People (PersonID INTEGER NOT NULL, FirstName ntext, LastName ntext)",
                "INSERT INTO People (PersonID, FirstName, LastName) VALUES (1, 'First', 'Last')",
                "INSERT INTO People (PersonID, FirstName, LastName) VALUES (2, 'Dewey', 'Cheatem')",
                "INSERT INTO People (PersonID, FirstName, LastName) VALUES (3, 'And', 'How')",
            };
            foreach (var cmd in commands)
                using (var c = conn.CreateCommand ()) {
                    c.CommandText = cmd;
                    c.CommandType = CommandType.Text;
                    conn.Open ();
                    c.ExecuteNonQuery ();
                    conn.Close ();
                }
        }
        return conn;
    }

    static void Write (SqliteDataReader reader, int index)
    {
        Console.Error.Write ("({0} '{1}')",
                reader.GetName (index),
                reader [index]);
    }
}

The above code creates the Documents/mydb.db3 SQLite database, populates it if it doesn't already exist, then executes a SQL query against the database using normal, standard, ADO.NET mechanisms.

What's Missing?

Functionality is missing from System.Data.dll and Mono.Data.Sqlite.dll.

Functionality missing from System.Data.dll consists of:

Meanwhile, Mono.Data.Sqlite.dll suffered no source code changes, but instead may be host to a number of runtime issues (the primary reason this is a preview release). Mono.Data.Sqlite.dll binds SQLite 3.5. iPhoneOS, meanwhile, ships with SQLite 3.0. Suffice it to say, some things have changed between the two versions. ;-)

Thus, the real question is this: what's missing in SQLite 3.0? The following functions are used by Mono.Data.Sqlite.dll but are missing from iPhoneOS's SQLite:

Where are these functions used (i.e. what can't you use from Mono.Data.Sqlite)? These appear to be related to database schema querying, e.g. determining at runtime which columns exist on a given table, such as Mono.Data.Sqlite.SqliteConnection.GetSchema (overriding DbConnection.GetSchema) and Mono.Data.Sqlite.SqliteDataReader.GetSchemaTable (overriding DbDataReader.GetSchemaTable). In short, it seems that anything using DataTable is unlikely to work.

Why Provide Mono.Data.Sqlite?

Why not? We realize that there are pre-existing SQLite solutions, but felt that many people would prefer to use the ADO.NET code they're already familiar with. Bringing System.Data and Mono.Data.Sqlite to MonoTouch permits this.

What About Data Binding?

Data binding with e.g. a UITableView is not currently implemented.


Conclusion

I suck at conclusions. :-)

Hope you enjoy this preview!

Posted on 21 Oct 2009 | Path: /development/mono/MonoTouch/ | Permalink

Linq to SQL on Mono Update: NerdDinner on Mono

NerdDinner is an ASP.NET MVC sample, licensed under the Ms-PL with sources hosted at CodePlex.

It is now possible to run the web portions of NerdDinner 1.0 on Linux with Mono trunk, thanks to Marek Habersack and Gonzalo Paniagua Javier's help with Mono's ASP.NET and ASP.NET MVC support, and the DbLinq community's assistance with Linq to SQL support.

This shows a growing level of maturity within Mono's Linq to SQL implementation.

  1. Build Mono from trunk. The Parallel Mono Environments page may be helpful.
  2. Download the NerdDinner 1.0 sources through a web browser. (curl or wget won't work.)
  3. Extract the NerdDinner sources:
    $ mkdir -p $HOME/tmp
    $ cd $HOME/tmp
    $ unzip "/path/to/NerdDinner"
  4. Build NerdDinner 1.0:
    $ cd "$HOME/tmp/NerdDinner 1.0/NerdDinner"
    $ mkdir bin
    $ gmcs -t:library -out:bin/NerdDinner.dll -debug+ -recurse:'*.cs' \
        -r:System -r:System.Configuration -r:System.Core \
        -r:System.Data -r:System.Data.Linq -r:System.Web \
        -r:System.Web.Abstractions -r:System.Web.Mvc -r:System.Web.Routing
  5. As mentioned in the introduction, only the web portion runs under Mono, as does the data access layer (System.Data.Linq, more affectionately known as Linq to SQL). The database is still Microsoft SQL Server. Find yourself a Windows machine, install SQL Server 2008 (Express is fine), and perform the following bits of configuration:
    1. Create the database files:
      1. Copy the NerdDinner_log.ldf and NerdDinner.mdf files from the $HOME/tmp/NerdDinner 1.0/NerdDinner/App_Data directory to your Windows machine, e.g. C:\tmp.
      2. Within Windows Explorer, go to C:\tmp, right-click the C:\tmp folder, click Properties, click the Security tab, click Edit..., and add the Full Control, Modify, Read & execute, List folder contents, Read, and Write permissions to the User group. Click OK.
      3. Repeat the above permissions modifications for the NerdDinner_log.ldf and NerdDinner.mdf files in C:\tmp.
    2. Add the NerdDinner database files to Microsoft SQL Server:
      1. Start Microsoft SQL Server Management Studio (Start → All Programs → Microsoft SQL Server 2008 → SQL Server Management Studio).
      2. Connect to your database instance.
      3. Within the Object Explorer (View → Object Explorer), right-click the database name and click Attach....
      4. Within the Attach Databases dialog, click the Add... button, and choose C:\tmp\NerdDinner.mdf in the Locate Database Files dialog. Click OK in both the Locate Database Files dialog and the Attach Databases dialog.
    3. Enable mixed-mode authentication:
      1. Start Microsoft SQL Server Management Studio.
      2. Connect to your database instance.
      3. Within the Object Explorer, right-click the database name and click Properties.
      4. In the Server Properties dialog, select the Security page.
      5. In the Server authentication section, select the SQL Server and Windows Authentication mode radio button.
      6. Click OK.
      7. Restart SQL Server by right-clicking on the database name and clicking Restart.
    4. Add a SQL Server user:
      1. Within SQL Server Management Studio, connect to the database instance.
      2. Within the Object Explorer, expand the Security node.
      3. Right-click the Logins node, and click New Login....
      4. In the Login - New dialog, enter a login name. We'll use jonp for discussion purposes. Select the SQL Server authentication radio button, and enter a password in the Password and Confirm Password text boxes. For discussion purposes we'll use 123456.
      5. Still within the Login - New dialog, select the User Mapping page. In the Users mapped to this login section, select the checkbox in the Map column corresponding to the NerdDinner database. Within the Database role membership for: NerdDinner section, select the db_datareader and db_datawriter roles. Click OK.
    5. Enable remote access to SQL Server (see also):
      1. Configure SQL Server:
        1. Start SQL Server Configuration Manager (Start → All Programs → Microsoft SQL Server 2008 → Configuration Tools → SQL Server Configuration Manager).
        2. In the left-hand pane, select the SQL Server Configuration Manager (Local) → SQL Server Network Configuration → Protocols for Database Instance Name node.
        3. In the right pane, double click the TCP/IP Protocol Name.
        4. In the Protocol tab, set Enabled to Yes. Click OK.
        5. In the left-hand pane, go to the SQL Server Configuration Manager (Local) → SQL Server Services node.
        6. In the right pane, double-click SQL Server Browser.
        7. In the Service tab, set the Start Mode property to Automatic. Click OK.
        8. Right-click SQL Server Browser, and click Start.
        9. Right-click SQL Server, and click Restart.
      2. Configure Windows Firewall
        1. Within Windows Control Panel, open the Windows Firewall applet.
        2. Click the Allow a program through Windows Firewall link.
        3. In the Windows Firewall Settings dialog, click the Exceptions tab.
        4. Click Add program..., and add the following programs:
          • sqlbrowser.exe (C:\Program Files\Microsoft SQL Server\90\Shared\sqlbrowser.exe)
          • sqlservr.exe (C:\Program Files\Microsoft SQL Server\MSSQL10.SQLEXPRESS\MSSQL\Binn\sqlservr.exe)
        5. Click OK.
  6. Back on the Linux side of things, edit $HOME/tmp/NerdDinner 1.0/NerdDinner/ConnectionStrings.config, and change the NerdDinnerConnectionString connection string to:
    <add name="NerdDinnerConnectionString"
        connectionString="Data Source=gourry\SQLEXPRESS;Initial Catalog=NerdDinner;User ID=jonp;Password=123456;"/>
    You will need to adjust the machine name in the Data Source parameter to contain your actual computer name, and change the User ID and Password to whatever values you chose in §5.E.iv.
  7. NerdDinner makes use of ASP.NET's MembershipProvider functionality, so a SQLite database needs to be created to contain the username and password information for the NerdDinner site. This is detailed at the ASP.NET FAQ and Guide: Porting ASP.NET Applications pages:
    $ cd "$HOME/tmp/NerdDinner 1.0/NerdDinner/App_Data"
    # Create the commands needed to configure the SQLite database:
    $ cat > aspnetdb.sql <<EOF
    CREATE TABLE Users (
     pId                                     character(36)           NOT NULL,
     Username                                character varying(255)  NOT NULL,
     ApplicationName                         character varying(255)  NOT NULL,
     Email                                   character varying(128)  NOT NULL,
     Comment                                 character varying(128)  NULL,
     Password                                character varying(255)  NOT NULL,
     PasswordQuestion                        character varying(255)  NULL,
     PasswordAnswer                          character varying(255)  NULL,
     IsApproved                              boolean                 NULL, 
     LastActivityDate                        timestamptz             NULL,
     LastLoginDate                           timestamptz             NULL,
     LastPasswordChangedDate                 timestamptz             NULL,
     CreationDate                            timestamptz             NULL, 
     IsOnLine                                boolean                 NULL,
     IsLockedOut                             boolean                 NULL,
     LastLockedOutDate                       timestamptz             NULL,
     FailedPasswordAttemptCount              integer                 NULL,
     FailedPasswordAttemptWindowStart        timestamptz             NULL,
     FailedPasswordAnswerAttemptCount        integer                 NULL,
     FailedPasswordAnswerAttemptWindowStart  timestamptz             NULL,
     CONSTRAINT users_pkey PRIMARY KEY (pId),
     CONSTRAINT users_username_application_unique UNIQUE (Username, ApplicationName)
    );
    CREATE INDEX users_email_index ON Users (Email);
    CREATE INDEX users_islockedout_index ON Users (IsLockedOut);
    CREATE TABLE Roles (
     Rolename                                character varying(255)  NOT NULL,
     ApplicationName                         character varying(255)  NOT NULL,
     CONSTRAINT roles_pkey PRIMARY KEY (Rolename, ApplicationName)
    );
    CREATE TABLE UsersInRoles (
     Username                                character varying(255)  NOT NULL,
     Rolename                                character varying(255)  NOT NULL,
     ApplicationName                         character varying(255)  NOT NULL,
     CONSTRAINT usersinroles_pkey PRIMARY KEY (Username, Rolename, ApplicationName),
     CONSTRAINT usersinroles_username_fkey FOREIGN KEY (Username, ApplicationName) REFERENCES Users (Username, ApplicationName) ON DELETE CASCADE,
     CONSTRAINT usersinroles_rolename_fkey FOREIGN KEY (Rolename, ApplicationName) REFERENCES Roles (Rolename, ApplicationName) ON DELETE CASCADE
    );
    CREATE TABLE Profiles (
     pId                                     character(36)           NOT NULL,
     Username                                character varying(255)  NOT NULL,
     ApplicationName                         character varying(255)  NOT NULL,
     IsAnonymous                             boolean                 NULL,
     LastActivityDate                        timestamptz             NULL,
     LastUpdatedDate                         timestamptz             NULL,
     CONSTRAINT profiles_pkey PRIMARY KEY (pId),
     CONSTRAINT profiles_username_application_unique UNIQUE (Username, ApplicationName),
     CONSTRAINT profiles_username_fkey FOREIGN KEY (Username, ApplicationName) REFERENCES Users (Username, ApplicationName) ON DELETE CASCADE
    );
    CREATE INDEX profiles_isanonymous_index ON Profiles (IsAnonymous);
    CREATE TABLE ProfileData (
     pId                                     character(36)           NOT NULL,
     Profile                                 character(36)           NOT NULL,
     Name                                    character varying(255)  NOT NULL,
     ValueString                             text                    NULL,
     ValueBinary                             bytea                   NULL,
     CONSTRAINT profiledata_pkey PRIMARY KEY (pId),
     CONSTRAINT profiledata_profile_name_unique UNIQUE (Profile, Name),
     CONSTRAINT profiledata_profile_fkey FOREIGN KEY (Profile) REFERENCES Profiles (pId) ON DELETE CASCADE
    );
    EOF
    # Create the SQLite database:
    $ sqlite3 aspnetdb.sqlite
    sqlite> .read aspnetdb.sql
    sqlite> .quit
  8. Run the web app:
    $ MONO_IOMAP=all xsp2
    The MONO_IOMAP environment variable is needed because some link targets used within NerdDinner require case insensitivity.
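The aspnetdb.sqlite import in step 7 can be sanity-checked with the sqlite3 shell. The sketch below recreates just one of the tables in a scratch file (the /tmp paths are invented for the demo; the real database is aspnetdb.sqlite) and then lists the tables:

```shell
# Sketch: verify that the .read-based import works, using only the Roles table.
cat > /tmp/roles.sql <<'EOF'
CREATE TABLE Roles (
 Rolename        character varying(255) NOT NULL,
 ApplicationName character varying(255) NOT NULL,
 CONSTRAINT roles_pkey PRIMARY KEY (Rolename, ApplicationName)
);
EOF
rm -f /tmp/roles.sqlite
sqlite3 /tmp/roles.sqlite ".read /tmp/roles.sql"
sqlite3 /tmp/roles.sqlite ".tables"   # prints: Roles
```

Running `.tables` against the real aspnetdb.sqlite should likewise list Users, Roles, UsersInRoles, Profiles, and ProfileData.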

NerdDinner on Linux!

Posted on 14 May 2009 | Path: /development/mono/ | Permalink

Mono 2.4 and mdoc-update

Mono 2.4 was released, and among the unlisted changes was that mdoc-update has migrated from using Reflection to using Mono.Cecil.

There are multiple advantages and disadvantages to this migration. The disadvantages include slower execution (when I tested, Mono.Cecil took ~10% longer to do the same task as Reflection) and increased dependencies (Mono.Cecil is now required).

I believe that these disadvantages are outweighed by the advantages. Firstly, the migration makes my life significantly easier. One of the major limitations of Reflection is that only one mscorlib.dll can be loaded into a process. This means that, in order to support generating documentation from mscorlib.dll 1.0, there needs to be a version of mdoc-update that runs under .NET 1.0. Similarly, to document mscorlib.dll 2.0, I need a different version of mdoc-update which runs under .NET 2.0. And when .NET 4.0 is released (with yet another version of mscorlib.dll), I'll need...yet another version of mdoc-update to run under .NET 4.0. This is less than ideal, and using Mono.Cecil allows me to have one program which supports every version of mscorlib.dll.

This also means that I can use C# 3.0 features within mdoc-update, as I no longer need to ensure that (most of) mdoc-update can run under the .NET 1.0 profile.

Most people won't care about making my life easier, but I do. ;-)

For everyone else, the most important result of the Mono.Cecil migration is that mdoc-update now has a suitable base for advanced documentation generation scenarios which make use of IL analysis. The first feature making use of it is new --exceptions functionality, which analyzes member IL to determine which exceptions could be generated, and creates stub <exception/> XML documentation based on that analysis. This feature is experimental (see the documentation), and contains a number of corner cases, but I've already found it useful for writing Mono.Rocks documentation.

Posted on 31 Mar 2009 | Path: /development/mono/ | Permalink

DbLinq and Mono

.NET 3.5 introduced Language Integrated Query (LINQ), which allowed for querying groupings of data across diverse "paradigms" -- collections (arrays, lists, etc.), XML, and relational data, called LINQ to SQL. LINQ to SQL support is within the System.Data.Linq assembly, which is one of the assemblies Mono is currently implementing.

However, LINQ to SQL has one limitation: it only works with Microsoft SQL Server and Microsoft SQL Server Compact Edition, leaving users of numerous other databases unable to use this assembly.

Enter DbLinq, an effort to provide LINQ to SQL functionality for other databases, including Firebird, Ingres, MySQL, Oracle, PostgreSql, SQLite, and SQL Server. DbLinq provides a System.Data.Linq-compatible implementation for these databases (compatible implying the same types and methods, but located within a different namespace).

Which brings us to Mono. Mono is using DbLinq as the foundation for Mono's System.Data.Linq.dll implementation, allowing Mono's System.Data.Linq.dll to support all the databases that DbLinq supports. Mono also has sqlmetal (based on DbLinq's DbMetal.exe sources), which can be used to generate C# types to interact with databases.

DbLinq On Mono

MonoDevelop can load the DbLinq solutions. However, it has a problem with building all of the projects within the solution. At the time of this writing, MonoDevelop can build the following assemblies: DbLinq.dll, DbLinq.Sqlite_test_mono_strict.dll, DbLinq.SqlServer.dll, DbLinq.SqlServer_test.dll, DbLinq.SqlServer_test_ndb.dll, DbLinq.SqlServer_test_strict.dll, and DbLinq_test_ndb_strict.dll. The *_test* assemblies are unit tests, so this leaves the core DbLinq.dll assembly and SQL Server support.

Thus, DbLinq is usually built with Visual Studio.NET (the free Visual Studio Express can be used). Once built, you can run some of the unit tests under Mono:

$ cd $path_to_dblinq2007_checkout/build.dbg

# Core tests
$ for test in DbLinq_test.dll DbLinq_test_ndb_strict.dll DbMetal_test.dll ; do \
    nunit-console2 $test ; \
  done
# Verbose output omitted

# SQLite tests
$ nunit-console2 DbLinq.Sqlite_test_mono.dll
# Verbose output omitted

# Plus many tests for the other providers...

Most of the tests require an accessible database, so I've been limiting my current tests to SQLite (as setup is easier).

DbLinq In Mono

As mentioned before, DbLinq is being used to implement Mono's System.Data.Linq.dll. (For those reading the DbLinq source, the Mono-specific bits are within MONO_STRICT conditional code.) This allows us to write code that depends only on .NET assemblies (though this is of dubious value, as the mechanisms used to support SQLite and other databases won't work with .NET proper, but it's still a cute trick).

To play along, you'll need Mono trunk.

  1. Grab a SQLite database file to use with LINQ to SQL:
  2. Use sqlmetal to generate C# bindings for the database:
    sqlmetal /namespace:nwind /provider:Sqlite "/conn:Data Source=Northwind.db3" /code:nwind.cs
  3. Write some code to interact with the generated source code:
    // File: nwind-app.cs
    // Compile as: 
    //    gmcs nwind-app.cs nwind.cs -r:System.Data \
    //    	-r:System.Data.Linq -r:Mono.Data.Sqlite
    using System;
    using System.Data.Linq;
    using System.Linq;
    using Mono.Data.Sqlite;
    using nwind;

    class Test {
        public static void Main ()
        {
            var conn = new SqliteConnection (
                    "DbLinqProvider=Sqlite;" + 
                    "Data Source=Northwind.db3");
            Main db = new Main (conn);
            var pens =
                from p in db.Products 
                where p.ProductName == "Pen"
                select p;
            foreach (var pen in pens) {
                Console.WriteLine ("     CategoryID: {0}",  pen.CategoryID);
                Console.WriteLine ("   Discontinued: {0}",  pen.Discontinued);
                Console.WriteLine ("      ProductID: {0}",  pen.ProductID);
                Console.WriteLine ("    ProductName: {0}",  pen.ProductName);
                Console.WriteLine ("QuantityPerUnit: {0}",  pen.QuantityPerUnit);
                Console.WriteLine ("   ReorderLevel: {0}",  pen.ReorderLevel);
                Console.WriteLine ("     SupplierID: {0}",  pen.SupplierID);
                Console.WriteLine ("      UnitPrice: {0}",  pen.UnitPrice);
                Console.WriteLine ("   UnitsInStock: {0}",  pen.UnitsInStock);
                Console.WriteLine ("   UnitsOnOrder: {0}",  pen.UnitsOnOrder);
            }
        }
    }
  4. Compile:
    gmcs nwind-app.cs nwind.cs -r:System.Data -r:System.Data.Linq -r:Mono.Data.Sqlite
  5. Run:
    $ mono nwind-app.exe 
       Discontinued: False
          ProductID: 1
        ProductName: Pen
    QuantityPerUnit: 10
         SupplierID: 1
       UnitsInStock: 12
       UnitsOnOrder: 2

Notice that we use the database connection string to specify the database vendor: the DbLinqProvider value names the vendor, and must be present when connecting to a database other than Microsoft SQL Server (which is the default vendor).

If using the DataContext(string) constructor directly (and not through a generated subclass as used above), you should also provide the DbLinqConnectionType parameter, which is the assembly-qualified type name to use for the IDbConnection implementation. This allows you to use multiple different IDbConnection implementations that use similar SQL implementations, e.g. Mono.Data.Sqlite.dll and System.Data.SQLite.dll, both of which wrap the SQLite database.
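Putting the two parameters together, a connection string passed to DataContext(string) might look something like the following. This is only a sketch assembled from the description above; the exact assembly-qualified type name is the one Mono.Data.Sqlite ships, and the database file name is illustrative:

```
DbLinqProvider=Sqlite;
DbLinqConnectionType=Mono.Data.Sqlite.SqliteConnection, Mono.Data.Sqlite;
Data Source=Northwind.db3
```

Swapping the DbLinqConnectionType value for System.Data.SQLite's connection type would keep the same SQL dialect while changing the ADO.NET driver underneath.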

Posted on 12 Mar 2009 | Path: /development/mono/ | Permalink

Extension Method Documentation

C# 3.0 adds a new language feature called extension methods. Extension methods allow the "addition" of new instance methods to any type, without modifying the type itself. This is extremely powerful, arguably crack-addled, and exists because Visual Studio users can't do anything without code completion (tongue firmly in cheek).

It's also extremely useful, permitting LINQ and the even more crack-addled thinking in Mono.Rocks (much of which I wrote, and I'm not entirely sure if the "crack" is in jest or not; sometimes I wonder...).

To create an extension method, you first create a static class. A method within the static class is an extension method if the first parameter's type has a this modifier:

static class MyExtensions {
    public static string Implode (this IEnumerable<string> self, string separator)
    {
        return string.Join (separator, self.ToArray ());
    }
}

Usage is as if it were a normal instance method:

string[] a        = {"This", "is", "my", "sentence."};
string   imploded = a.Implode (" ");
// imploded == "This is my sentence."

Extension methods are entirely syntactic sugar. (Nice syntactic sugar, nonetheless...) As such, they don't in any way modify the type being extended. Consequently, they cannot access private members, nor are extension methods returned when reflecting over the extended type. For example, typeof(string[]).GetMethod("Implode") will return null, as System.Array doesn't have an Implode method.

Furthermore, extension methods are only available if you have a using declaration for the namespace the extension method type resides in. So if the above MyExtensions type resides in the Example namespace, and a source file doesn't have using Example;, then the Implode extension method isn't available.
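The two points above can be seen in one small program: the Implode extension is only usable because of the using Example; directive, and reflecting over the extended type shows it was never actually modified. (The Example and Client namespace names are made up for this sketch.)

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

namespace Example {
    public static class MyExtensions {
        public static string Implode (this IEnumerable<string> self, string separator)
        {
            return string.Join (separator, self.ToArray ());
        }
    }
}

namespace Client {
    using Example;   // remove this, and a.Implode(...) below fails to compile

    class Test {
        static void Main ()
        {
            string[] a = {"This", "is", "my", "sentence."};
            Console.WriteLine (a.Implode (" "));
            // The extended type itself is unchanged:
            Console.WriteLine (typeof (string[]).GetMethod ("Implode") == null);
        }
    }
}
```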

Earlier I alluded that Visual Studio users can't do anything without code completion. Extension methods are thus a boon, as they (potentially) make it easier to find new functionality, as no new types need to be introduced or known about in advance. However, you still need to have an appropriate using declaration to bring the methods "in scope," so how does a developer know what namespaces to use? The same way a developer knows which type to use for anything: documentation.

MSDN online documentation has been enhanced to show which extension methods are applicable for a given type, e.g. The extension methods for IEnumerable<T>. Mono has similar documentation support.

This isn't particularly interesting, though. Part of the utility and flexibility is that any type, in any namespace, can be extended with extension methods, and the extension methods themselves can be contained in any type.

Obviously, MSDN and Mono documentation online can't know about extension methods that are not part of the core framework. Thus, if the e.g. Mono.Cecil or Gendarme frameworks provided extension methods, the online documentation sites won't be helpful.

Which brings us to a Mono 2.0 feature (yes, I'm only now announcing a feature that shipped 3 months ago):

Mono Documentation Tools: the Mono Documentation framework has been upgraded to support documenting generics and extension methods.

This support consists of four things:

  1. Enhancing mdoc update to generate an /Overview/ExtensionMethods element within index.xml. The /Overview/ExtensionMethods element contains <ExtensionMethod/> elements which in turn contains //Targets/Target elements specifying which types the extension method is an instance method on, and a <Member/> element which is a subset of the actual extension method documentation. Developers don't need to edit this copy; it's handled entirely by mdoc update.
  2. Enhancing mdoc assemble to look for the //ExtensionMethod elements and insert them into the ExtensionMethods.xml file within the generated .zip file.
  3. Enhancing the XML documentation to HTML generation process so that the extension methods are listed. This allows all of monodoc and mod, online documentation, and mdoc export-html to use the underlying infrastructure.
  4. Enhancing monodoc.dll to load all ExtensionMethods.xml files from all installed .zip files. This then allows the monodoc and online documentation mechanisms to show extension methods for all installed documentation sources.

The short of it is that this requires no workflow change to get extension methods listed on all extended types. Just create extension methods, document them as if they were normal static methods (as they are normal static methods, and can be invoked as such), assemble the documentation, and install the documentation.

There is one wrinkle, though: since the index.xml file contains a subset of the <Member/> documentation, you need to rerun mdoc update after editing extension method documentation so that index.xml will have the correct documentation when mdoc assemble is run. Otherwise the "summary" extension method documentation may differ from the actual intended documentation. This may be improved in a future release.

Posted on 25 Jan 2009 | Path: /development/mono/ | Permalink

How To Defend Against Software Patent FUD

You don't.


Context: for years, Mono has been the target of FUD because of potential software patent issues. For years the Mono community has attempted to defend against these attacks, sometimes successfully.

Recently, someone asked on mono-list about ways to pre-emptively answer the FUD so that it would become a non-issue. I responded, and had several people suggest that I blog it. Here we go.

To begin, there are several problems with defending against software patent FUD, starting with software patents themselves:

  1. Software patents suck.
  2. Software patents really suck. (Specifically, The "Don't Look" Problem section.)
  3. Software patents really, really suck. (Related)
  4. The anti-Mono FUDsters apparently can't see the forest for the trees.

I imagine that most people reading this will agree with the first three points, so it is the fourth that I will attempt to focus on.

Specifically, the anti-Mono FUDsters seem to spend so much time on a tree (Microsoft) that they either miss or minimize the forest of actual patent problems, patent trolls, etc.

So for once, I'll (non-seriously) throw the FUD:

A long time ago, Wang was granted a patent that "covered a method by which a program can get help from another computer application to complete a task." Microsoft licensed the patent from Wang. Sun did not. In 1997, Eastman Kodak Company bought Wang, thus acquiring this patent. Kodak then sued Sun, claiming that Java infringed it. Kodak won, and the companies later settled out of court.

Now, for my non-serious steaming pile of FUD, in the form of a question: Did Sun acquire the ability to sublicense these patents from Kodak? If Sun can sublicense the patents, then GPL'd Java is fine. If Sun can't, then Java cannot be GPL'd, and any company making use of Java could be subject to a lawsuit from Kodak.

(I would hope that this is yes, but I have no idea, and the lack of patent sub-licensing has come up before.)

So do we need to worry about Java? I have no idea. I mention it to raise a larger point:

It Doesn't Matter. Anyone can hold a patent, for anything, and sue anyone at any time. Thus, Gnome is not free of patent issues, KDE is not free of patent issues, Linux is not free of patent issues, Python is not free of patent issues, Ruby is not free of patent issues.... Nothing is free of patent issues.

(Consider: do you think that the Python Software Foundation has signed a patent license with Kodak? Has Red Hat? I doubt it. Furthermore, I find it hard to believe that something as flexible as Python wouldn't violate the aforementioned Wang patent, especially when you get into COM interop/etc. on Windows...)

Having said the above, a related question becomes: How do you avoid violating someone's patents? You don't (insert more laughter). You could try restricting yourself to only using software that's at least 20 years old, but you won't gain many users that way. It also won't work, for at least two reasons: (1) submarine patents -- not all patents that would have been in effect 20 years ago have necessarily expired (though submarine patents shouldn't exist for ~too much longer); and (2) look at the drug patent industry, where to prevent patented drugs from "going generic" the drug companies take the patent-expired drug(s), combine them with other drugs, then patent the result. I don't think it will take too long for Software companies to start doing this if they feel that it's necessary, and once they do, even using known-patent-expired programs won't be safe, as merely combining them together may be covered by an unexpired patent. Yay.

The only other way to avoid software patents is to perform a patent search, which is extremely tricky (as software patents are deliberately vague), and if you miss a patent and get sued over it, you're now liable for treble damages. You're almost always better to not look at software patents. (Isn't it funny how something that was supposed to "promote the Progress of Science and useful Arts" can't be used by those it's supposed to help? Isn't it hilarious?)

With all this in mind, you can see why patent FUD is hard to fight, because there's no way to dismiss it. Software patents are a reality, they're ugly, but they can't be avoided. (Yet they must be ignored, to avoid increased liability.) My problem is that the anti-Mono people only seem to focus on patents with respect to Mono and Microsoft, ignoring the rest of the software industry. They're ignoring the (gigantic) forest so that they can pay attention to a single tree, Microsoft.

What I find even "funnier" is that Microsoft supposedly holds a number of patents in a number of areas frequently used by open-source projects, such as HTML, CSS, C++, XML, and others. So why don't we ever see any suggestions to avoid these technologies because the Big Bad Microsoft might sue?

For that matter, (again) considering how vague software patents tend to be, wouldn't many Microsoft patents on .NET stand a chance at being applicable toward Java, Python, and other projects? (Again) Why just focus on Mono?

Final note: I am a Software Engineer, not a patent lawyer. Feel free to ignore the entire rant, but I would appreciate it if a little more thought went into all the anti-Mono propaganda.

Posted on 19 Jan 2009 | Path: /development/mono/ | Permalink

Unix Signal Handling In C#

In the beginning, Unix introduced signal(2), which permits a process to respond to external "stimuli", such as a keyboard interrupt (SIGINT), floating-point error (SIGFPE), dereferencing the NULL pointer (SIGSEGV), and other asynchronous events. And lo, it was...well, acceptable, really, but there wasn't anything better, so it at least worked. (Microsoft, when faced with the same problem of allowing processes to perform some custom action upon an external stimulus, invented Structured Exception Handling.)

Then, in a wrapping binge, I exposed it for use in C# with Stdlib.signal(), so that C# code could register signal handlers to be invoked when a signal occurred.

The problem? By their very nature, signals are asynchronous, so even in a single-threaded program, you had to be very careful about what you did, as your "normal" thread was certainly in the middle of doing something. For example, calling malloc(3) was almost certainly a bad idea, because if the process was in the middle of a malloc call already, you'd have a reentrant malloc call which could corrupt the heap.

This reentrant property impacts all functions in the process, including system calls. Consequently, a list of functions that were "safe" for invocation from signal handlers was standardized, and is listed in the above signal man page; it includes functions such as read(2) and write(2), but not functions like e.g. pwrite(2).

Consequently, these limitations and a few other factors led to the general recommendation that signal handlers should be as simple as possible, such as writing to a global variable which the main program occasionally polls.

What's this have to do with Stdlib.signal(), and why was it a mistake to expose it? The problem is the P/Invoke mechanism, which allows marshaling C# delegates as a function pointer that can be invoked from native code. When the function pointer is invoked, the C# delegate is eventually executed.

However, before the C# delegate can be executed, a number of steps need to happen first:

  1. The first thing it does is to ensure the application domain for the thread where the signal handler executes actually matches the appdomain the delegate comes from, if it isn't it may need to set it and do several things that we can't guarantee are signal context safe...
  2. If the delegate is of an instance method we also need to retrieve the object reference, which may require taking locks...

In the same email, lupus suggests an alternate signal handling API that would be safe to use from managed code. Later, I provided a possible implementation. It amounts to treating the UnixSignal instance as a glorified global variable, so that it can be polled to see if the signal has been generated:

UnixSignal signal = new UnixSignal (Signum.SIGINT);
while (!signal.IsSet) {
  /* normal processing */
}

There is also an API to permit blocking the current thread until the signal has been emitted (which also accepts a timeout):

UnixSignal signal = new UnixSignal (Signum.SIGINT);
// Wait for SIGINT to be generated within 5 seconds
if (signal.WaitOne (5000, false)) {
    // SIGINT generated
}

Groups of signals may also be waited on:

UnixSignal[] signals = new UnixSignal[]{
    new UnixSignal (Signum.SIGINT),
    new UnixSignal (Signum.SIGTERM),

// block until a SIGINT or SIGTERM signal is generated.
int which = UnixSignal.WaitAny (signals, -1);

Console.WriteLine ("Got a {0} signal!", signals [which].Signum);

This isn't as powerful as the current Stdlib.signal() mechanism, but it is safe to use, doesn't lead to potentially ill-defined or unwanted behavior, and is the best that we can readily provide for use by managed code.

Mono.Unix.UnixSignal is now in svn-HEAD and the mono-1-9 branch, and should be part of the next Mono release.

Posted on 08 Feb 2008 | Path: /development/mono/ | Permalink

Mono and Mixed Mode Assembly Support

An occasional question is whether Mono will support mixed-mode assemblies, as generated by Microsoft's Managed Extensions for C++ compiler (Visual Studio 2001, 2003) and C++/CLI (Visual Studio 2005, 2008).

The answer is no, and mixed mode assemblies will likely never be supported.


First, what's a mixed mode assembly? A mixed mode assembly is an assembly that contains both managed (CIL) and unmanaged (machine language) code. Consequently, they are not portable to other CPU instruction sets, just like normal C and C++ programs and libraries.

Next, why use them? The primary purpose for mixed mode assemblies is as "glue", to e.g. use a C++ library class as a base class of a managed class. This allows the managed class to extend unmanaged methods, allowing the managed code to be polymorphic with respect to existing unmanaged functions. This is extremely useful in many contexts. However, as something like this involves extending a C++ class, it requires that the compiler know all about the C++ compiler ABI (name mangling, virtual function table generation and placement, exception behavior), and thus effectively requires native code. If the base class is within a separate .dll, this will also require that the mixed mode assembly list the native .dll as a dependency, so that the native library is also loaded when the assembly is loaded.

The other thing that mixed mode assemblies support is the ability to export new C functions so that other programs can LoadLibrary() the assembly and GetProcAddress the exported C function.

Both of these capabilities require that the shared library loader for the platform support Portable Executable (PE) files, as assemblies are PE files. If the shared library loader supports PE files, then the loader can ensure that when the assembly is loaded, all listed dependent libraries are also loaded (case 1), or that native apps will be able to load the assembly as if it were a native DLL and resolve DLL entry points against it (case 2).

This requirement is met on Windows, which uses the PE file format for EXE and DLL files. This requirement is not met on Linux, which uses ELF, nor is it currently met on Mac OS X, which uses Mach-O.

So why can't mixed mode assemblies be easily supported in Mono? Because the platform's shared library loader doesn't like PE.

The only workarounds for this would be to either extend assemblies so that ELF files can contain both managed and unmanaged code, or to extend the shared library loader to support the loading of PE files. Using ELF as an assembly format may be useful, but would restrict portability of such ELF-assemblies to only Mono/Linux; .NET could never make use of them, nor could Mono on Mac OS X. Similarly, extending the shared library loader to support PE could be done, but can it support loading both PE and ELF (or Mach-O) binaries into a single process? What happens if a PE file loaded into an "ELF" process requires KERNEL32.DLL? Extending the shared library loader isn't a panacea either.

This limitation makes mixed mode assemblies of dubious value. It is likely solvable, but there are far more important things for Mono to focus on.

Posted on 27 Jan 2008 | Path: /development/mono/ | Permalink

So you want to parse a command line...

If you develop command-line apps, parsing the command-line is a necessary evil (unless you write software so simple that it doesn't require any options to control its behavior). Consequently, I've written and used several parsing libraries, including Mono.GetOptions, Perl's Getopt::Long library, and some custom written libraries or helpers.

So what's wrong with them? The problem with Mono.GetOptions is that it has high code overhead: in order to parse a command line, you need a new type (which inherits from Mono.GetOptions.Options) and annotate each field or property within the type with an Option attribute, and let Mono.GetOptions map each command-line argument to a field/property within the Options subclass. See monodocer for an example; search for Opts to find the subclass.

The type-reflector parser is similarly code heavy, if only in a different way. The Mono.Fuse, lb, and omgwtf parsers are one-offs, either specific to a particular environment (e.g. integration with the FUSE native library) or not written with any eye toward reuse.

Which leaves Perl's Getopt::Long library, which I've used for a number of projects, and quite like. It's short, concise, requires no object overhead, and allows seeing at a glance all of the options supported by a program:

use Getopt::Long;
my $data    = "file.dat";
my $help    = undef;
my $verbose = 0;

GetOptions (
	"file=s"    => \$data,
	"v|verbose" => sub { ++$verbose; },
	"h|?|help"  => \$help,
);

The above may be somewhat cryptic at first, but it's short, concise, and lets you know at a glance that it takes three sets of arguments, one of which takes a required string parameter (the file option).

So, says I, what would it take to provide similar support in C#? With C# 3.0 collection initializers and lambda delegates, I can get something that feels rather similar to the above GetOpt::Long code:

string data = null;
bool help   = false;
int verbose = 0;

var p = new Options () {
	{ "file=",      (v) => data = v },
	{ "v|verbose",  (v) => { ++verbose; } },
	{ "h|?|help",   (v) => help = v != null },
};
p.Parse (argv).ToArray ();

Options.cs has the goods, plus unit tests and additional examples (via the tests).

Options is both more and less flexible than Getopt::Long. It doesn't support providing references to variables, instead using a delegate to do all variable assignment. In this sense, Options is akin to Getopt::Long while requiring that all options use a sub callback (as the v|verbose option does above).

Options is more flexible in that it isn't restricted to just strings, integers, and floating point numbers. If there is a TypeConverter registered for your type (to perform string->object conversions), then any type can be used as an option value. To do so, merely declare that type within the callback:

int count = 0;

var p = new Options () {
	{ "c|count=", (int v) => count = v },
};

As additional crack, you can provide an (optional) description of the option so that Options can generate help text for you:

var p = new Options () {
	{ "really-long-option", "description", (v) => {} },
	{ "h|?|help", "print out this message and exit", (v) => {} },
};
p.WriteOptionDescriptions (Console.Out);

would generate the text:

      --really-long-option   description
  -h, -?, --help             print out this message and exit

Options currently supports: options with required values (file=), multiple aliases for a single option (h|?|help), strongly-typed values via TypeConverter (c|count=), and per-option descriptions for generating help text.

All un-handled parameters are returned from the Options.Parse method, which is implemented as an iterator (hence the calls to .ToArray() in the above C# examples, to force processing).
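The iterator behavior is worth seeing in miniature: nothing is processed until the returned sequence is enumerated, which is exactly why the examples above call .ToArray(). The Parse method below is a toy stand-in I wrote for illustration, not the real Options.Parse.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class IteratorDemo {
    // Toy stand-in for Options.Parse: handles "-" arguments,
    // lazily yields the un-handled ones back to the caller.
    static IEnumerable<string> Parse (IEnumerable<string> args)
    {
        foreach (string a in args) {
            if (a.StartsWith ("-"))
                Console.WriteLine ("handled: " + a);  // side effect runs lazily
            else
                yield return a;                       // un-handled, returned
        }
    }

    static void Main ()
    {
        var unhandled = Parse (new[]{"-v", "input.txt"});
        // Nothing has been printed yet: iterators run only when enumerated.
        string[] extra = unhandled.ToArray ();        // forces processing
        Console.WriteLine ("extra: " + string.Join (", ", extra));
    }
}
```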

Posted on 07 Jan 2008 | Path: /development/mono/ | Permalink

Re-Introducing monodocer

In the beginning... Mono was without documentation. Who needed it when Microsoft had freely available documentation online? (That's one of the nice things about re-implementing -- and trying to stay compatible with -- a pre-existing project: reduced documentation requirements. If you know C# under .NET, you can use C# under Mono, by and large, so just take an existing C# book and go on your way...)

That's not an ideal solution, as MSDN is/was slow. Very slow. Many seconds to load a single page slow. (And if you've ever read the .NET documentation on MSDN where it takes many page views just to get what you're after... You might forget what you're looking for before you find it.) A local documentation browser is useful.

Fortunately, the ECMA 335 standard comes to the rescue (somewhat): it includes documentation for the types and methods which were standardized under ECMA, and this documentation is freely available and re-usable.

The ECMA documentation consists of a single XML file (currently 7.2MB) containing all types and type members. This wasn't an ideal format for writing new documentation, so the file was split up into per-type files; this is what makes up the monodoc svn module (along with many documentation improvements since, particularly for types and members that are not part of the ECMA standard).

However, this ECMA documentation import was last done many years ago, and the ECMA documentation has improved since then. (In particular, it now includes documentation for many types/members added in .NET 2.0.) We had no tools to import any updates.


Shortly after the ECMA documentation was originally split up into per-type files, Mono needed a way to generate documentation stubs for non-ECMA types within both .NET and Mono-specific assemblies. This was (apparently) updater.exe.

Eventually, Joshua Tauberer created monodocer, which both creates ECMA-style documentation stubs (in one file/type format) and can update documentation based on changes to an assembly (e.g. add a new type/member to an assembly and the documentation is updated to mention that new type/member).

By 2006, monodocer had (more-or-less) become the standard for generating and updating ECMA-style documentation, so when I needed to write Mono.Fuse documentation I used monodocer...and found it somewhat lacking in support for generics. Thus begins my work on improving monodocer.

monodocer -importecmadoc

Fast-forward to earlier this year. Once monodocer could support generics, we could generate stubs for all .NET 2.0 types. Furthermore, ECMA had updated documentation for many core .NET 2.0 types, so...what would it take to get ECMA documentation re-imported?

This turned out to be fairly easy, with support added in mid-May to import ECMA documentation via a -importecmadoc:FILENAME parameter. The problem was that this initial version was slow; quoting the ChangeLog, "WARNING: import is currently SLOW." How slow? ~4 minutes to import documentation for System.Array.

This might not be too bad, except that there are 331 types in the ECMA documentation file, documenting 3797 members (fields, properties, events, methods, constructors, etc.). 4 minutes per type is phenomenally slow.

Optimizing monodocer -importecmadoc

Why was it so slow? -importecmadoc support was originally modeled after -importslashdoc support, which works as follows: look up every type and member in System.Reflection order, create an XPath expression for each member, and execute an XPath query against the documentation we're importing. If we get a match, import the found node.

The slowdown was twofold: (1) we loaded the entire ECMA documentation into a XmlDocument instance (XmlDocument is a DOM interface, and thus copies the entire file into memory), and (2) we were then accessing the XmlDocument randomly.

The first optimization is purely algorithmic: don't import documentation in System.Reflection order, import it in ECMA documentation order. This way, we read the ECMA documentation in a single pass, instead of randomly.

As is usually the case, algorithmic optimizations are the best kind: it cut down the single-type import from ~4 minutes to less than 20 seconds.

I felt that this was still too slow, as 20s * 331 types is nearly 2 hours for an import. (This is actually faulty reasoning, as much of that 20s time was to load the XmlDocument in the first place, which is paid for only once, not for each type.) So I set out to improve things further.

First was to use a XPathDocument to read the ECMA documentation. Since I wasn't editing the document, I didn't really need the DOM interface that XmlDocument provides, and some cursory tests showed that XPathDocument was much faster than XmlDocument for parsing the ECMA documentation (about twice as fast). This improved things, cutting single-type documentation import from ~15-20s to ~10-12s. Not great, but better.

Convinced that this still wasn't fast enough, I went to the only faster XML parser within .NET: XmlTextReader, which is a pull-parser lacking any XPath support. This got a single-file import down to ~7-8s.

I feared that this would still need ~45 minutes to import, but I was running out of ideas so I ran a full documentation import for mscorlib.dll to see what the actual runtime was. Result: ~2.5 minutes to import ECMA documentation for all types within mscorlib.dll. (Obviously the ~45 minute estimate was a little off. ;-)


Does this mean that we'll have full ECMA documentation imported for the next Mono release? Probably not. There are still a few issues with the documentation import where it skips members that ideally would be imported (for instance, documentation for System.Security.Permissions.FileIOPermissionAttribute.All isn't imported because Mono provides a get accessor while ECMA doesn't). The documentation also needs to be reviewed after import to ensure that the import was successful (a number of bugs have been found and fixed while working on these optimizations).

Hopefully it won't take me too long to get things imported...

Posted on 15 Jul 2007 | Path: /development/mono/ | Permalink

POSIX Says The Darndest Things

make check was reported to be failing earlier this week, and Mono.Posix was one of the problem areas:

1) MonoTests.Mono.Unix.UnixGroupTest.ListAllGroups_ToString : #TLAU_TS:
Exception listing local groups: System.IO.FileNotFoundException: Nie ma
takiego pliku ani katalogu ---> Mono.Unix.UnixIOException: Nie ma
takiego pliku ani katalogu [ENOENT].
  at Mono.Unix.UnixMarshal.ThrowExceptionForLastError () [0x00000] in
  at Mono.Unix.UnixGroupInfo.GetLocalGroups () [0x0001c] in
  at MonoTests.Mono.Unix.UnixGroupTest.ListAllGroups_ToString ()
[0x0000a] in
  at MonoTests.Mono.Unix.UnixGroupTest.ListAllGroups_ToString ()
[0x0003c] in
  at <0x00000> <unknown method>
  at (wrapper managed-to-native)
System.Reflection.MonoMethod:InternalInvoke (object,object[])
  at System.Reflection.MonoMethod.Invoke (System.Object obj,
BindingFlags invokeAttr, System.Reflection.Binder binder,
System.Object[] parameters, System.Globalization.CultureInfo culture)
[0x00040] in

Further investigation narrowed things down to Mono_Posix_Syscall_setgrent() in support/grp.c:

int
Mono_Posix_Syscall_setgrent (void)
{
	errno = 0;
	setgrent ();
	return errno == 0 ? 0 : -1;
}

I did this because setgrent(3) can fail, even though it has a void return type; quoting the man page:

Upon error, errno may be set. If one wants to check errno after the call, it should be set to zero before the call.

Seems reasonably straightforward, no? Clear errno, do the function call, and if errno is set, an error occurred.

Except that this isn't true. On Gentoo and Debian, calling setgrent(3) may set errno to ENOENT (no such file or directory), because setgrent(3) tries to open the file /etc/default/nss. Consequently, Mono.Unix.UnixGroupInfo.GetLocalGroups reported an error (as can be seen in the above stack trace).

Further discussion with some Debian maintainers brought forth the following detail: It's only an error if it's a documented error. So even though setgrent(3) set errno, it wasn't an error because ENOENT isn't one of the documented error values for setgrent(3).

"WTF!," says I.

So I dutifully go off and fix it, so that only documented errors result in an error:

int
Mono_Posix_Syscall_setgrent (void)
{
	errno = 0;
	do {
		setgrent ();
	} while (errno == EINTR);
	mph_return_if_val_in_list5(errno, EIO, EMFILE, ENFILE, ENOMEM, ERANGE);
	return 0;
}

...and then I go through the rest of the MonoPosixHelper code looking for other such erroneous use of errno and error reporting. There are several POSIX functions with void return types that are documented as generating no errors, and others are like setgrent(3) where they may generate an error.

It's unfortunate that POSIX has void functions that can trigger an error. It makes binding POSIX more complicated than it should be.

Posted on 29 Jun 2007 | Path: /development/mono/ | Permalink

Mono.Fuse 0.4.1

Now with MacFUSE support!

Mono.Fuse is a C# binding for FUSE. This is a minor update over the previous Mono.Fuse 0.4.0 release.

The highlight for this release is cursory MacFUSE support, which allows Mono.Fuse to work on Mac OS X. Unfortunately, it's not complete support, and I would appreciate any assistance in fixing the known issues (details below).


To use Mono.Fuse on Mac OS X, do the following:

  1. Download and install Mono. Other releases can be found at the Mono Project Downloads Page.
  2. Download and install MacFUSE 0.2.4 or later. Other releases can be found at the macfuse download page.
  3. Download, extract, and configure Mono.Fuse 0.4.1:
    1. curl > mono-fuse-0.4.1.tar.gz
    2. tar xzf mono-fuse-0.4.1.tar.gz
    3. cd mono-fuse-0.4.1
    4. PKG_CONFIG_PATH=/usr/local/lib/pkgconfig CFLAGS="-D__FreeBSD__=10 -O -g" ./configure --prefix=`pwd`/install-root
      • Note: PKG_CONFIG_PATH is needed so that fuse.pc will be found by pkg-config.
      • Note: CFLAGS is used as per the macfuse FAQ.
      • Note: You can choose any other --prefix you want.
    5. make
  4. Once Mono.Fuse has been built, you can run the sample programs as described in the README:
    1. cd example/HelloFS/
    2. mkdir t
    3. ./hellofs t &
    4. ls t
    5. cat t/hello

Known Issues

HelloFS works, but RedirectFS and RedirectFS-FH do not. Trying to execute them results in a SIGILL within Mono.Unix.Native.Syscall.pread when trying to read a file:

  1. cd example/RedirectFS
  2. mkdir t
  3. MONO_TRACE_LISTENER=Console.Out:+++ ./redirectfs t ~/ &
    • Note: MONO_TRACE_LISTENER set so that exception messages from Mono.Fuse.FileSystem will be printed to stdout. See the mono(1) man page for more information about MONO_TRACE_LISTENER.
  4. ls t        # works
  5. cat t/some-file-that-exists
    • Generates a SIGILL.

I would appreciate any assistance in fixing this issue.


Download

Mono.Fuse 0.4.1 is available from It can be built with Mono 1.1.13 and later. Apple Mac OS X support has only been tested with Mono

GIT Repository

The GIT repository for Mono.Fuse is at

Posted on 13 Apr 2007 | Path: /development/mono/ | Permalink

When Comparisons Fail

One of the unsung helper programs for Mono.Fuse and Mono.Unix is create-native-map ( man page), which takes an assembly, looks for DllImport-attributed methods, and generates C structure and function prototypes for those methods and related types. This allows e.g. the internal Mono.Posix.dll methods to be kept in sync with their implementation methods in MonoPosixHelper, checked by the compiler to ensure type consistency.

One of the "features" of create-native-map is support for integer overflow checking. For example, if you have a C# type:

[Map ("struct foo")]
struct Foo {
  public int member;
}

then create-native-map will generate the (excerpted) C code (it generates much more):

struct Foo {
  int member;
};

int ToFoo (struct foo *from, struct Foo *to)
{
  _cnm_return_val_if_overflow (int, from->member, -1);
  to->member = from->member;
  return 0;
}

This could be handy: if the actual type of struct foo::member differed from int, we could tell at runtime whether the value of from->member would fit within to->member. That was the hope, anyway. (Yes, this flexibility is required, as many Unix structures only standardize the member name and type, but not necessarily the actual type. For example, struct stat::st_nlink is of type nlink_t, which will vary between platforms, but Mono.Unix.Native.Stat.st_nlink can't change between platforms; it needs to expose an ABI-agnostic interface for portability. Consequently, overflow checking is desirable when doing Stat → struct stat conversions, and vice versa, to ensure that nothing is lost.)

The reality is that _cnm_return_val_if_overflow() was horribly buggy and broke if you looked at it wrong (i.e. it worked for me and would fail on many of the build machines running !Linux). Consequently _cnm_return_val_if_overflow() was converted into a no-op unless DEBUG is defined before/during the Mono 1.2.0 release.

Why discuss this now? Because Mono.Fuse 0.4.0 shipped with a broken version of create-native-map, which is the primary reason that it doesn't work with MacFUSE.

But because I'm a glutton-for-punishment/insane, I thought I'd take a look into making overflow checking work again (though it still won't be enabled unless DEBUG is defined). I wrote some tests, got them working on Linux, and tried to run them on Intel Mac OS X. The result: all but one worked. The reason it failed is inexplicable: a failing comparison. G_MININT64 can't be directly compared against 0:

$ cat ovf.c
#include <glib.h>
#include <limits.h>
#include <stdio.h>

int main ()
{
  long long v = G_MININT64;
  printf (" LLONG_MIN < 0? %i\n", (int) (LLONG_MIN < 0));
  printf ("G_MININT64 < 0? %i\n", (int) (G_MININT64 < 0));
  printf ("         v < 0? %i\n", (int) (v < 0));
  return 0;
}

$ gcc -o ovf ovf.c `pkg-config --cflags --libs glib-2.0`
$ ./ovf
 LLONG_MIN < 0? 1
G_MININT64 < 0? 0
         v < 0? 1

Now that's a w-t-f: G_MININT64 < 0 is FALSE. Simply bizarre...

Meanwhile, I should have a Mono.Fuse 0.4.1 release out "soon" to fix these problems, permitting Mono.Fuse to work properly with MacFUSE.

Posted on 12 Apr 2007 | Path: /development/mono/ | Permalink

Novell, Microsoft, & Patents

The news is out: hell has frozen over. Novell and Microsoft have announced a "patent cooperation agreement," whereby Microsoft won't sue Novell customers for patent infringement and Novell won't sue Microsoft customers for patent infringement.

I first heard about this on mono-list, and immediately replied with the obvious (to me) response.

Note: I am not a lawyer [0], so consequently everything I say is bunk, but I have been paying some attention to various lawsuits over the years.

That out of the way, take a step back and ignore Microsoft and Novell for the moment. Assume that you're a patent holder, and you decide that your patent has been infringed. Who do you sue? There are three possible defendants:

  1. Sue the developer. (Example: Stac Electronics vs. Microsoft.)
  2. Sue the distributor. This is frequently identical to (1) as the developer is the distributor, but the rise of Free and Open Source software introduces this distinction.
  3. Sue the customer of (1) and/or (2). The example I remembered hearing several years ago was Timeline vs. Microsoft [1].

The summary is this: software patents are evil, allowing virtually anyone to sue virtually everyone else. There are no assurances of safety anywhere. Software from large companies (Internet Explorer) can be sued as easily as software from a no-name company or the open-source community (see Eolas vs. Microsoft).

With that background out of the way, what does this Microsoft/Novell deal mean? It means exactly what they say: Novell won't sue Microsoft customers, and Microsoft won't sue Novell customers. Anyone else can still sue Microsoft, Novell, and their customers, so Novell and Microsoft customers really aren't any safer than they were before. Novell customers are a little safer -- the monster in the closet of a Microsoft lawsuit is no longer an issue -- but no one is completely safe. The agreement just provides peace of mind, but it isn't -- and cannot be -- a complete "solution" to the threat of patent lawsuits. (The only real solution is the complete abolition of all software patents, which is highly unlikely.)

What does this mean for hobbyists who contribute to Mono, Samba, Wine, Linux, and other projects (like me)? It means I'm protected as part of this agreement, as my code is distributed as part of openSUSE. This also means that anyone other than Microsoft can sue me if I happen to violate a patent.

What about hobbyists whose code isn't part of openSUSE? Nothing has changed -- they're as subject to a lawsuit as they were a week ago.

What about other companies such as Red Hat? Nothing has changed for them, either. Red Hat is still safe, as it is a member of the Open Invention Network, which was created to deal with the potential for patent lawsuits from any party. OIN is a more complete solution for most parties involved than the Microsoft and Novell agreement, as it involves more parties.

The problem with OIN is that it only covers the members of OIN. Red Hat is protected, but any distributors of Red Hat code are not (such as CentOS), and neither are the customers of Red Hat (unless the customer has a patent protection contract with their supplier). Consequently, OIN serves to protect the original developers (1), but not any "downstream" distributors (2) or their customers (3).

But what about the GPL, section 7? Doesn't the Microsoft/Novell agreement violate it?

7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program.

The simple solution is that this doesn't apply, as this agreement doesn't touch this clause at all. It's not a consequence of a court judgment, there is no allegation of patent infringement, and I haven't heard of any conditions that Microsoft requires Novell to follow in order for the code to be freely distributed. The Microsoft/Novell agreement primarily covers their customers, not their code, so there isn't a problem.

[0] But I did stay at a Holiday Inn last night!
[1] Computerworld Article.

Posted on 04 Nov 2006 | Path: /development/mono/ | Permalink

Mono.Fuse 0.4.0

Mono.Fuse is a C# binding for FUSE. This is the fourth major release.

This release contains a few major changes to the public API for consistency and clarification purposes, the biggest of which is renaming Mono.Fuse.FileSystemEntry to Mono.Fuse.DirectoryEntry (which of course required changing Mono.Fuse.FileSystem.OnReadDirectory(), again!). Some of the Mono.Fuse.FileSystem properties were also renamed for consistency.

I'm still making no promises for API stability. The FileSystem virtual methods should be fairly stable, but the properties may continue to be flexible as I document them more fully (as I'm not entirely sure what the ramifications are for some of them, such as FileSystem.ReaddirSetsInode vs. FileSystem.SetsInode, and answering these questions will require reading the FUSE source).

API Changes from the previous release:

See the commit diff for specifics.


Download

Mono.Fuse 0.4.0 is available from It can be built with Mono 1.1.13 and later.

GIT Repository

A GIT repository for Mono.Fuse is at

Posted on 20 Sep 2006 | Path: /development/mono/ | Permalink

Naming, Mono.Fuse Documentation

I find naming to be difficult. What should a type be named, what should a member be named? What's consistent? What names are easily understandable in any context? What names sow confusion instead of clarity?

This is why writing documentation, as annoying as it can be (lots of repetition), is also useful: it forces a different viewpoint on the subject.

For example, Mono's monodocer program uses System.Reflection to generate an initial documentation stub for use within monodoc. As such, it shows you the public types which are actually within an assembly, not just what you thought was in the assembly, like the compiler-generated default constructors which are so easy to forget about.

I've documented every public type and member within Mono.Fuse: Mono.Fuse Documentation.

And if you haven't guessed by now, the types have changed, because writing documentation forces a different viewpoint on the subject, and shows all of the glaring inconsistencies within the API. So much for my hope that the API would be reasonably stable after the 0.3.0 release. <Sigh>

Consequently, the docs are only ~90% useful for the most recent 0.3.0 release, as they document the forthcoming 0.4.0 release. I hope to get 0.4.0 out reasonably soon, though I still have to write release notes.

The GIT repository has been updated so that HEAD contains the 0.4.0 sources, so if you're really interested in using the current API, you can git-clone it for now.

Mono.Fuse Home Page

Posted on 20 Sep 2006 | Path: /development/mono/ | Permalink

Mono.Fuse 0.3.0

Mono.Fuse is a C# binding for FUSE. This is the third major release.

This release completely changes the public API for consistency and performance. Hopefully this will be the last API change, though I would appreciate any feedback on the current Mono.Fuse.FileSystem.OnReadDirectory API.

API Changes from the previous release:


Download

Mono.Fuse 0.3.0 is available from It can be built with Mono 1.1.13 and later.

GIT Repository

A GIT repository for Mono.Fuse is at

Posted on 11 Sep 2006 | Path: /development/mono/ | Permalink

Miguel's ReflectionFS

After the Mono.Fuse 0.2.1 release, Miguel de Icaza wrote a small Mono.Fuse program that exposed System.Reflection information as a filesystem.

With the Mono.Fuse 0.3.0 release, this sample no longer works, as the Mono.Fuse API changed. Thus, here is an updated version of the sample:

$ cp `pkg-config --variable=Libraries mono-fuse` .
$ gmcs ReflectionFS.cs -r:Mono.Fuse.dll -r:Mono.Posix.dll
$ mkdir t
$ mono ReflectionFS.exe t &
$ ls t/mscorlib/System.Diagnostics.ConditionalAttribute
ConditionString      GetType             ToString
Equals               get_TypeId          TypeId
get_ConditionString  IsDefaultAttribute
GetHashCode          Match
$ fusermount -u t

Note that the above requires that PKG_CONFIG_PATH contain a directory with the mono-fuse.pc file (created during Mono.Fuse installation), and that the mono-fuse native helper library should be in LD_LIBRARY_PATH or a directory listed in /etc/

Mono.Fuse also contains other sample programs. In particular, RedirectFS-FH.cs is a straightforward port of FUSE's fusexmp_fh.c sample program, and shows a way to "redirect" a new mountpoint to display the contents of another existing directory. RedirectFS-FH.cs is actually an improvement, as fusexmp_fh.c just shows the contents of the / directory at any new mount point, while RedirectFS-FH can redirect to any other directory.

Posted on 11 Sep 2006 | Path: /development/mono/ | Permalink

Mono.Fuse, Take 2.1!

At Miguel's request, I've created a version of Mono.Fuse that doesn't depend upon Mono runtime changes. This should make it possible for more people to try it out.

Don't forget to read the README, as it contains build instructions and a description of how to run the included example program.

I should caution that the API isn't stable (I suspect Mono.Fuse.FileSystem.OnRead should probably become Mono.Fuse.FileSystem.OnReadFile, for one), and I look forward to any and all API suggestions that you can provide.

Two final notes: Mono.Fuse depends on FUSE, and FUSE is a Linux kernel module, so you'll need to run:

/sbin/modprobe fuse

as the root user before you can use any FUSE programs. You'll also need to install the FUSE user-space programs, as you must use the fusermount program to unmount a directory that has been mounted by FUSE, e.g.:

fusermount -u mount-point
Posted on 01 Sep 2006 | Path: /development/mono/ | Permalink

Mono.Fuse, Take 2!

See the original announcement for more on what Mono.Fuse is (in short: a C# binding for FUSE).

This is an update, releasing Mono.Fuse 0.2.0, and (more importantly) an updated set of patches to mcs, mono, and now mono-tools. The mcs and mono patches are required to build & run Mono.Fuse, while the mono-tools patch is optional and only necessary if you want to view the create-native-map.exe program.

See here for all patches and an overview.

The major change between this set of patches and the original set is one of approach: the original set tried to make the native MonoPosixHelper API public, which was deemed as unacceptable (as there's too much cruft in there that we don't want to maintain).

The new approach only adds public APIs to the Mono.Unix.Native.NativeConvert type, permitting managed code to copy any existing native instance of supported structures. For example:

  Mono.Unix.Native.NativeConvert.Copy (IntPtr source, 
      out Mono.Unix.Native.Stat destination);

takes a pointer to an existing native struct stat and copies it into the managed Mono.Unix.Native.Stat instance. There are equivalent methods to do the managed → native conversion as well.

Since this approach requires making far fewer public API changes to Mono.Posix and MonoPosixHelper (i.e. no public API changes to MonoPosixHelper, as it's an internal/private library), I hope that this will be more acceptable.

Here's to a quick review!

Updated to add a link to the overview page.

Posted on 01 Sep 2006 | Path: /development/mono/ | Permalink

Announcing Mono.Fuse

Mono.Fuse is a binding for the FUSE library, permitting user-space file systems to be written in C#.


I read Robert Love's announcement of beaglefs, a FUSE program that exposes Beagle searches as a filesystem. My first thought: Why wasn't that done in C# (considering that the rest of Beagle is C#)?

What about SULF?

Stackable User-Level Filesystem, or SULF, is a pre-existing FUSE binding in C#, started by Valient Gough in 2004.

Mono.Fuse has no relation to SULF, for three reasons:

  1. It goes to great lengths to avoid a Mono.Posix.dll dependency, duplicating Mono.Unix.Native.Stat (Fuse.Stat), Mono.Unix.Native.Statvfs (Fuse.StatFS), and many methods from Mono.Unix.Native.Syscall (Fuse.Wrapper).
  2. I don't like the SULF API. (Not that I spent a great deal of time looking at it, but what I did see I didn't like.)
  3. SULF wraps the FUSE kernel-level interface, while Mono.Fuse wraps the higher level libfuse C interface.

I find (1) the most appalling, if only because I'm the Mono.Posix maintainer and I'd like to see my work actually used. :-)

Once I started writing Mono.Fuse, I discovered a good reason to avoid Mono.Posix: it's currently impossible to use the native MonoPosixHelper shared library from outside of Mono. I figured this would be a good opportunity to rectify that, making it easier for additional libraries to build upon the Mono.Posix infrastructure.


Mono.Fuse requires patches to the mcs and mono modules, changes which need to be proposed and discussed.


The biggest problem with the mono module is that no headers are installed, making it difficult to make use of


map.h is the current map.h file generated by make-map.exe, with some major additions (detailed in the mcs section).

helper.h is the main include file, which includes map.h and declares all types/functions which cannot be generated by make-map.exe.

mono-config.h is necessary because it needs to contain platform-specific macros. In particular, Linux needs:

int Mono_Posix_ToStatvfs (struct statvfs *from, struct Mono_Posix_Statvfs *to);

while OS X and *BSD need:

int Mono_Posix_ToStatvfs (struct statfs *from, struct Mono_Posix_Statvfs *to);

Note struct statvfs vs. struct statfs. The mono/posix/helper.h header needs to "paper over" the difference, and thus needs to know which type the platform prefers. helper.h thus looks like:

  #ifdef MONO_HAVE_STATVFS
  struct statvfs;
  int Mono_Posix_ToStatvfs (struct statvfs *from, 
      struct Mono_Posix_Statvfs *to);
  #endif

  #ifdef MONO_HAVE_STATFS
  struct statfs;
  int Mono_Posix_ToStatvfs (struct statfs *from, 
      struct Mono_Posix_Statvfs *to);
  #endif

One of MONO_HAVE_STATVFS or MONO_HAVE_STATFS would be defined in mono-config.h.


There are two major changes:

The MapAttribute attribute is now public, so that make-map.exe can use a publicly exposed API for code-generation purposes, one which can also be used by other libraries (Mono.Fuse makes use of these changes).

make-map.exe can also generate structure declarations and delegate declarations in addition to P/Invoke function declarations, allowing for a better, automated interface between C and C#.

Previously, [Map] could only be used on enumerations.

Now, [Map] can be used on classes, structures, and delegates, to create a C declaration of the C# type, suitable for P/Invoke purposes, e.g. the C# code:

[Map] struct Stat {public FilePermissions mode;}

would generate the C declaration

struct Namespace_Stat {unsigned int mode;};

The MapAttribute.NativeType property is used to specify that type conversion functions should be generated, thus:

[Map ("struct stat")] struct Stat {public FilePermissions mode;}

would generate

struct Namespace_Stat {unsigned int mode;};
int Namespace_ToStat (struct stat *from, struct Namespace_Stat *to);
int Namespace_FromStat (struct Namespace_Stat *from, struct stat *to);

along with the actual implementations of Namespace_ToStat() and Namespace_FromStat().

The MapAttribute.NativeSymbolPrefix property is used to specify the C "namespace" to use:

[Map (NativeSymbolPrefix="Foo")] struct Stat {public FilePermissions mode;}

would generate

struct Foo_Stat {unsigned int mode;};

This prefix is also used for the conversion functions.

(You may be wondering why NativeSymbolPrefix exists at all. This is for reasonable symbol versioning -- make-map.exe currently has a "hack" in place to rename Mono.Unix(.Native) to Mono_Posix, a hack I'd like to remove, and NativeSymbolPrefix allows the Mono.Unix.Native types to have a Mono_Posix C namespace in a reasonably general manner.)

The previously internal Mono.Unix.HeaderAttribute has been removed. The HeaderAttribute.Includes and HeaderAttribute.Defines properties have been replaced with make-map.exe command-line arguments. In particular, HeaderAttribute.Includes has been replaced with --autoconf-header, --impl-header, --impl-macro, --public-header, and --public-macro (the first three modify the generated .c file, while the latter two modify the generated .h file).

Finally, make-map.exe has been renamed and moved from mcs/class/Mono.Posix/Mono.Unix.Native/make-map.exe to mcs/tools/create-native-map/create-native-map.exe.


  1. Go to for the patches and source download.
  2. Apply mcs.patch to a mcs checkout, rebuild, and install.
  3. Apply mono.patch to a mono checkout, rebuild, and install.
  4. Build mono-fuse-0.1.0.tar.gz in "the standard manner" (./configure ; make ; make install).


Posted on 29 Aug 2006 | Path: /development/mono/ | Permalink

Performance Comparison: IList<T> Between Arrays and List<T>

Rico Mariani recently asked a performance question: given the following code, which is faster, Sum(array), which converts a ushort[] to an IList<T>, or Sum(list), which uses the implicit conversion between List<T> and IList<T>.

using System;
using System.Collections.Generic;

class Test {
  static int Sum (IList<ushort> indeces)
  {
    int result = 0;
    for (int i = 0; i < indeces.Count; ++i)
      result += indeces [i];
    return result;
  }

  const int Size = 500000;

  public static void Main ()
  {
    ushort[] array = new ushort [Size];
    DateTime start = DateTime.UtcNow;
    Sum (array);
    DateTime end = DateTime.UtcNow;
    Console.WriteLine ("    ushort[]: {0}", end-start);

    List<ushort> list = new List<ushort> (Size);
    for (int i = 0; i < Size; ++i) list.Add (0);
    start = DateTime.UtcNow;
    Sum (list);
    end = DateTime.UtcNow;
    Console.WriteLine ("List<ushort>: {0}", end-start);
  }
}

Note that the question isn't about comparing the performance for constructing a ushort[] vs. a List<T>, but rather the use of an IList<ushort> backed by a ushort[] vs a List<ushort>.

The answer for Mono is that, oddly enough, List<ushort> is faster than ushort[]:

    ushort[]: 00:00:00.0690370
List<ushort>: 00:00:00.0368170

The question is, why?

The answer is, "magic." System.Array is a class with magical properties that can't be duplicated by custom classes. For example, all arrays, such as ushort[], inherit from System.Array, but only have an explicitly implemented IList indexer. What looks like an indexer usage results in completely different IL code; the compiler is involved, and must generate different code for an array access.

For example, an array access generates the IL code:

// int i = array [0];
ldarg.0    // load array
ldc.i4.0   // load index 0
ldelem.i4  // load element array [0]
stloc.0    // store into i

While an IList indexer access generates this IL code:

// object i = list [0];
ldarg.0    // load list
ldc.i4.0   // load index 0
callvirt instance object class [mscorlib]System.Collections.IList::get_Item(int32)
           // call IList.this [int]
stloc.0    // store into i

This difference in IL allows the JIT to optimize array access, since different IL is being generated only for arrays.

In .NET 2.0, System.Array got more magic: all array types implicitly implement IList<T> for the underlying array type, which is why the code above works (ushort[] implicitly implements IList<ushort>). However, this is provided by the runtime and is "magical," in that System.Reflection won't see that System.Array implements any generics interfaces. Magic.

On Mono, this is implemented via an indirection: arrays may derive from System.Array.InternalArray<T> instead of System.Array. InternalArray<T> implements IList<T>, permitting the implicit conversion from ushort[] to IList<ushort>.

However, this indirection has a performance impact: System.Array.InternalArray<T>.get_Item invokes System.Array.InternalArray<T>.GetGenericValueImpl, which is an internal call. This is the source of the overhead, as can be seen with mono --profile=default:stat program.exe:

prof counts: total/unmanaged: 172/97
     27 15.79 % mono
     12  7.02 % mono(mono_metadata_decode_row
     11  6.43 % Enumerator:MoveNext ()
     10  5.85 % Test:Sum (System.Collections.Generic.IList`1)
     10  5.85 % (wrapper managed-to-native) InternalArray`1:GetGenericValueImpl (int,uint16&)
     10  5.85 % InternalEnumerator:MoveNext ()

To conclude, List<ushort> is faster than ushort[], when accessed via an IList<ushort> reference, because the ushort[] can't be accessed as a normal array, precluding the usual runtime optimizations.

It should also be noted that because of this "magic," all arrays under .NET 2.0 have more overhead than the same arrays under .NET 1.1, because of the need to support the "magic" generics interfaces. This could be optimized to save memory, such that no memory is used if you never access the array via a generic interface, but Mono has not performed such an optimization yet.

Update: After discussing this on #mono, Paolo Molaro implemented an optimization which makes the array usage much faster. Now it's only slightly slower than List<ushort>:

    ushort[]: 00:00:00.0133390
List<ushort>: 00:00:00.0132830

Update 2: Rico Mariani has posted his .NET performance analysis. The key take home point? "Arrays are magic."

Posted on 10 Mar 2006 | Path: /development/mono/ | Permalink

System.Diagnostics Tracing Support

As I wrote Mono's original Trace support infrastructure, I should probably get around to implementing the much improved 2.0 version, which means I first need to understand it. Fortunately, Mike Rousos is documenting how it works:

Posted on 21 Sep 2005 | Path: /development/mono/ | Permalink

Mono.Unix Reorganization

Brad Abrams, author of the Framework Design Guidelines, recently posted his precon slides for the recent PDC.

I quickly read through it to see how well Mono.Unix follows it. I also recently ran FxCop on Mono.Posix.dll, with many interesting results.

One major point that Abrams' slides pointed out is on page 53:

This is completely different from how Mono.Unix currently operates, as it places both low-level classes such as Syscall and high-level classes such as UnixStream into the same namespace. The only difference between the low-level and high-level classes is the Unix prefix present on the high-level classes. This is a problem.

It's a problem because when looking at the class view or the documentation you get lost among the dozens of low-level types such as AccessMode, ConfStr, and Syscall, as the high-level wrapper classes -- having a Unix prefix -- will come after most of the types developers (hopefully) won't be interested in.

My solution is to separate the low-level classes into a Mono.Unix.Native namespace. The Mono.Unix namespace will be used for high-level types following CLS conventions (such as PascalCased types and methods) such as UnixFileSystemInfo, and for .NET integration classes such as UnixStream.

This change went into mono-HEAD today. All of the existing low-level Mono.Unix types have been marked [Obsolete], with messages directing users to use the appropriate Mono.Unix.Native types. Alas, some of these low-level types are used in properties or as the return types of methods in the high-level classes. These have been marked [Obsolete] for now, with a message stating that the property type or method return type will change in the next release. "Next release" in this case will be 1.1.11 or 1.2 (as I'm assuming the release of 1.1.10, which is when developers will actually see these messages if they don't follow mono-HEAD).

I'm also interested in better CLS compliance in the high-level classes. At present many of them are [CLSCompliant(false)] because they use non-CLS-compatible types such as uint or ulong. Should these be changed to CLS-compliant types? Any such changes should be done now (i.e. before 1.1.10), to allow time for migration.

Posted on 20 Sep 2005 | Path: /development/mono/ | Permalink

Major Change to Nullable Types

Poor Martin Baulig -- Microsoft has changed the design of System.Nullable so that it's far more integrated into the runtime. This change impacts things as fundamental as the box and unbox instructions...

Posted on 12 Aug 2005 | Path: /development/mono/ | Permalink

Frogger under Mono

DotGNU Portable.NET provides a .NET Curses wrapper for creating nifty console-based programs using the ncurses library. It is possible to run this under Mono.

Alas, the provided configure script doesn't work very well with Mono, so we need to do things manually. The following will allow you to build Portable.NET's Curses.dll ncurses wrapper and run the bundled Frogger game.

  1. Download and extract pnetcurses:
    $ wget
    $ tar xzf pnetcurses-0.0.2.tar.gz
    $ cd pnetcurses-0.0.2
  2. Build the helper library. You can ignore the warnings produced by GCC.
    $ cd help
    $ gcc -shared -o *.c -lncurses
    input.c: In function `CursesHelpGetNextChar':
    input.c:69: warning: integer constant is too large for "long" type
    $ cd ..
  3. Build the Curses.dll wrapper library:
    $ cd src
    $ mcs -t:library *.cs -out:Curses.dll
    Compilation succeeded
    $ cd ..
  4. Create a Curses.dll.config file so that Mono can load the appropriate native libraries. This file should go into the same directory as Curses.dll.
    $ cd src
    $ cat > Curses.dll.config <<EOF
      <dllmap dll="cygncurses5.dll" 
      <dllmap dll="libcsharpcurses-0-0-1.dll"
    $ cd ..
  5. Build the Frogger.exe demo program:
    $ cd frogger
    $ cp ../src/Curses.dll* .
    $ mcs -t:exe -out:Frogger.exe -r:Curses.dll *.cs
    Compilation succeeded
    $ cd ..
  6. Execute Frogger.exe with Mono:
    $ cd frogger
    $ LD_LIBRARY_PATH=`pwd`/../help mono Frogger.exe
Posted on 08 Apr 2005 | Path: /development/mono/ | Permalink

Mono.Unix Documentation Stubs

I've just added the Mono.Unix documentation stubs. Next step is to start documenting the members, mostly by copy/pasting the current Mono.Posix documentation.

Lessons learned:

  1. Documentation can be out of date, especially the documentation to update existing documentation. Oops. We're supposed to use the monodocer program.
  2. The correct command to generate/update documentation is: monodocer -assembly:assembly-name -path:directory-name.
  3. The monodocer program didn't like me (it generated a NullReferenceException because of a class in the global namespace). Patch in svn-trunk.
  4. Documentation is an alternate view of a class library.

That final point is the major point: through it, I realized that several methods in Mono.Unix which should be private were instead public. Oops. Obviously I need to document things more often...

Posted on 30 Jan 2005 | Path: /development/mono/ | Permalink