Disclaimer: this is an automatic aggregator which pulls feeds and comments from many blogs of contributors that have contributed to the Mono project. The contents of these blog entries do not necessarily reflect Xamarin's position.

July 7

Xamarin Test Cloud to Support Appium Framework

With Xamarin Test Cloud, our goal has always been to provide a mobile app testing service that developers love while solving the challenges that prevent testing from being a seamless part of the development process. Developers writing Xamarin Test Cloud tests have the choice of using Ruby and the Calabash testing framework or C# with Xamarin.UITest.

Today, we’re excited to announce that we’ll be adding support for writing tests with the Appium automation framework and its multiple language bindings, including Java, JavaScript, Python, and PHP, and we’d like to invite you to join our early access program.



A Fully-Integrated Testing Framework

Xamarin Test Cloud gives developers a fully-integrated testing framework by tightly integrating Calabash Ruby and C# Xamarin.UITest with our device cloud infrastructure, reporting capability, and development environment integration. Including the community-led Appium testing framework in Xamarin Test Cloud allows us to help even more developers increase app quality by eliminating the challenges that hold back most mobile testing initiatives, offering deep integration with your development processes and environment, plus a device management infrastructure that saves thousands of hours and dollars managing devices.

Developer Productivity

Xamarin Test Cloud automates the execution of your tests, helping you find bugs earlier in the development process and preventing regressions. It integrates with popular Continuous Integration (CI) systems like Jenkins, TeamCity, and Team Foundation Server to automatically initiate tests and report results back into CI systems. New features in Xamarin Studio and Xamarin for Visual Studio upload and execute tests to Xamarin Test Cloud directly from the developer environment, making app testing a seamless part of the developer experience.

Appium developers will be able to take advantage of Xamarin Test Cloud’s device management infrastructure, which utilizes real devices running multiple operating system versions and automatically loads, unloads, and securely restores each device to factory OEM standards between each test run, saving thousands of hours of manual labor and device expenses.

Get Started

If you’re already using Appium and are interested in being part of our early access program, let us know by filling out this form. You don’t have to be a Xamarin customer – your participation and feedback are important! We can’t offer access to everyone on day one, but Appium will soon be a fully supported framework alongside Calabash and Xamarin.UITest in Xamarin Test Cloud.

We’ll be doing a live webinar on Xamarin Test Cloud on Wednesday, July 8 at 8:30 am PT covering different mobile testing methodologies and providing an in-depth overview of how it works, along with other exciting plans on our product roadmap. Click here to register.

Keep building great apps!

The post Xamarin Test Cloud to Support Appium Framework appeared first on Xamarin Blog.

New Development Snapshot

Final 8.1 development snapshot. Release candidate 0 will be next (after .NET 4.6 RTM).


  • Updated HOWTO reference to OpenJDK 8u45.
  • Extract Windows version from kernel32.dll to avoid version lie. Idea stolen from OpenJDK.
  • Moved unused field removal optimization to a later stage in the compilation.
  • Made field removal optimization check more strict to only remove final fields and not remove fields that have annotations.
  • Added support for automatically passing in fields to "native" methods.
  • Various minor clean ups.
  • Added FieldWrapper.IsSerialVersionUID property to properly (and consistently) detect serialVersionUID fields.
  • Improved side effect free static initializer detection.
  • Improved -removeassertions ikvmc optimization to remove more code (esp. allow otherwise empty static initializers to be optimized away).

Binaries available here: ikvmbin-8.1.5666.zip

July 6

Connect to Customers with My Shoppe

Does your business have one or more storefronts? With the amount of competition today, it can be difficult to differentiate your shops from the masses. You need a convenient way to promote your business, enable customers to find you, and make it easy to find information about your offerings. What if you could combine all of this in a mobile app that also makes it easy for customers to engage with your stores when they have a great experience or need help?

My Shoppe Template App Hero Small

Introducing the My Shoppe Template App

Our newest template app, My Shoppe, solves this problem. My Shoppe enables you, as a business owner or operator, to easily connect with your customer base in several ways. Create a browsable list of shop locations so customers can find the nearest location, call the shop, see shop hours, and even get directions to the shop with a single click. In addition, customers can easily provide feedback on their in-store experience.

My Shoppe is based on Xamarin.Forms, so 100% of its code can be shared across iOS, Android, and Windows Phone. Several powerful Plugins for Xamarin are leveraged to provide platform-specific features from the shared codebase.

Built on Azure Mobile Apps

Leveraging Azure Mobile Apps as My Shoppe’s backend enables a super fast online/offline synchronized experience across all platforms from a single code base. Simply set up Azure Mobile Apps as your .NET backend and you’re ready to go.

Highly Customizable

My Shoppe leverages Xamarin.Forms to provide an incredible shared cross-platform user interface that is also highly customizable; for instance, you can adjust the colors to match your shop’s branding. My Shoppe was built to be backend-independent: the data store implements a simple interface, enabling you to swap out your backend for any other solution, such as JSON files, Parse, Couchbase, or a local SQLite database.
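For illustration, such a data store abstraction might look roughly like this; the interface and member names are hypothetical (as are the Shop and Feedback model classes), not the actual My Shoppe API:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical sketch of a swappable backend abstraction.
// Any backend (Azure, Parse, SQLite, plain JSON files) can be
// plugged in by providing an implementation of this interface.
public interface IDataStore
{
    Task<IEnumerable<Shop>> GetShopsAsync();
    Task<IEnumerable<Feedback>> GetFeedbackForShopAsync(string shopId);
    Task SaveFeedbackAsync(Feedback feedback);
}
```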

And, of course, since My Shoppe is a template app, you can easily add and extend My Shoppe with your own unique features and enhancements using the familiar MVVM architecture that has been implemented.

Administer Easily

My Shoppe is more than a customer facing mobile app. It also includes a complete administration app for iOS and Android that enables you to work with the same source backend as the consumer app. Easily add and manage your shops from your mobile device and reply to customer feedback with a single click, or call them directly from the app.

My Shoppe Template App Admin Hero

In Action

Take a peek at My Shoppe in action:

Give it a Spin

You can build your own mobile app for your shop today by downloading the entire template app source code from GitHub. Want to give it a try without downloading the code? You can download our sample My Shoppe app today on Android’s Google Play or Windows Phone Marketplace. My Shoppe will be available soon on the iOS App Store.

The post Connect to Customers with My Shoppe appeared first on Xamarin Blog.

Help us make Unity even better: Participate in user research!

We are constantly working to improve the Unity user experience. In order to do that in a meaningful way, we need to really know you: the user.

In the User Experience Team we conduct a lot of user research and user tests. Sometimes we go out in the field and observe you in your natural habitat (your workplace). Other times we meet you for an interview. We also invite you over for usability tests of our newest features.

Now you can sign up to participate in our user studies and research, and make a direct contribution to the future awesomeness of Unity!

So, how does it work?

You sign up by completing a survey that will let us know more about your skills, experience, interests, recent projects, etc. Whenever we run a research project and find that you are a good match for it, we will reach out and get in touch with you.

Who can sign up?

If you are a Unity user, you can sign up, regardless of whether you are an absolute beginner, an experienced superuser, or anything in between.

Unity has to empower all of its different types of users.

We hope you will help us make that happen.

You can find more information and sign up to participate in user research right here!

Thank you,

The User Experience Team


July 2

IL2CPP Internals: P/Invoke Wrappers

This is the sixth post in the IL2CPP Internals series. In this post, we will explore how il2cpp.exe generates wrapper methods and types used for interop between managed and native code. Specifically, we will look at the difference between blittable and non-blittable types, understand string and array marshaling, and learn about the cost of marshaling.

I’ve written a good bit of managed to native interop code in my days, but getting p/invoke declarations right in C# is still difficult, to say the least. Understanding what the runtime is doing to marshal my objects is even more of a mystery. Since IL2CPP does most of its marshaling in generated C++ code, we can see (and even debug!) its behavior, providing much better insight for troubleshooting and performance analysis.

This post does not aim to provide general information about marshaling and native interop. That is a wide topic, too large for one post. The Unity documentation discusses how native plugins interact with Unity. Both Mono and Microsoft provide plenty of excellent information about p/invoke in general.

As with all of the posts in this series, we will be exploring code that is subject to change and, in fact, is likely to change in a newer version of Unity. However, the concepts should remain the same. Please take everything discussed in this series as implementation details. We like to expose and discuss details like this when it is possible though!

The setup

For this post, I’m using Unity 5.0.2p4 on OSX. I’ll build for the iOS platform, using an “Architecture” value of “Universal”. I’ve built my native code for this example in Xcode 6.3.2 as a static library for both ARMv7 and ARM64.

The native code looks like this:

#include <cstring>
#include <cmath>

extern "C" {

int Increment(int i) {
  return i + 1;
}

bool StringsMatch(const char* l, const char* r) {
  return strcmp(l, r) == 0;
}

struct Vector {
  float x;
  float y;
  float z;
};

float ComputeLength(Vector v) {
  return sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
}

void SetX(Vector* v, float value) {
  v->x = value;
}

struct Boss {
  char* name;
  int health;
};

bool IsBossDead(Boss b) {
  return b.health == 0;
}

int SumArrayElements(int* elements, int size) {
  int sum = 0;
  for (int i = 0; i < size; ++i) {
    sum += elements[i];
  }
  return sum;
}

int SumBossHealth(Boss* bosses, int size) {
  int sum = 0;
  for (int i = 0; i < size; ++i) {
    sum += bosses[i].health;
  }
  return sum;
}

}

The scripting code in Unity is again in the HelloWorld.cs file. It looks like this:

void Start () {
  Debug.Log (string.Format ("Using a blittable argument: {0}", Increment (42)));
  Debug.Log (string.Format ("Marshaling strings: {0}", StringsMatch ("Hello", "Goodbye")));

  var vector = new Vector (1.0f, 2.0f, 3.0f);
  Debug.Log (string.Format ("Marshaling a blittable struct: {0}", ComputeLength (vector)));
  SetX (ref vector, 42.0f);
  Debug.Log (string.Format ("Marshaling a blittable struct by reference: {0}", vector.x));

  Debug.Log (string.Format ("Marshaling a non-blittable struct: {0}", IsBossDead (new Boss("Final Boss", 100))));

  int[] values = {1, 2, 3, 4};
  Debug.Log(string.Format("Marshaling an array: {0}", SumArrayElements(values, values.Length)));
  Boss[] bosses = {new Boss("First Boss", 25), new Boss("Second Boss", 45)};
  Debug.Log(string.Format("Marshaling an array by reference: {0}", SumBossHealth(bosses, bosses.Length)));
}

Each of the method calls in this code invokes the native code shown above. We will look at the managed method declaration for each method as we see it later in the post.

Why do we need marshaling?

Since IL2CPP is already generating C++ code, why do we need marshaling from C# to C++ code at all? Although the generated C++ code is native code, the representation of types in C# differs from C++ in a number of cases, so the IL2CPP runtime must be able to convert back and forth from representations on both sides. The il2cpp.exe utility does this both for types and methods.

In managed code, all types can be categorized as either blittable or non-blittable. Blittable types have the same representation in managed and native code (e.g. byte, int, float). Non-blittable types have a different representation in managed and native code (e.g. bool, string, array types). As such, blittable types can be passed to native code directly, but non-blittable types require some conversion before they can be passed to native code. Often this conversion involves new memory allocation.
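As a quick illustration, a struct whose fields are all primitives is blittable, while a struct containing a bool or string is not (the type names here are made up for the example):

```csharp
using System.Runtime.InteropServices;

// Blittable: every field has the same representation in managed
// and native code, so instances can be passed to native code as-is.
[StructLayout(LayoutKind.Sequential)]
struct Point
{
    public int X;
    public float Y;
}

// Non-blittable: bool and string are represented differently in
// native code, so the runtime must convert them before the call.
[StructLayout(LayoutKind.Sequential)]
struct Label
{
    public bool Visible;
    public string Text;
}
```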

In order to tell the managed code compiler that a given method is implemented in native code, the extern keyword is used in C#. This keyword, along with a DllImport attribute, allows the managed code runtime to find the native method definition and call it. The il2cpp.exe utility generates a wrapper C++ method for each extern method. This wrapper performs a few important tasks:

  • It defines a typedef for the native method which is used to invoke the method via a function pointer.
  • It resolves the native method by name, getting a function pointer to that method.
  • It converts the arguments from their managed representation to their native representation (if necessary).
  • It calls the native method.
  • It converts the return value of the method from its native representation to its managed representation (if necessary).
  • It converts any out or ref arguments from their native representation to their managed representation (if necessary).
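For example, the managed side of the Increment call from our native library is declared like this (using "__Internal" as the library name, since the native code is statically linked into the app on iOS):

```csharp
using System.Runtime.InteropServices;

class NativeMethods
{
    // "__Internal" indicates the function is statically linked into
    // the app binary on iOS, rather than loaded from a separate library.
    [DllImport("__Internal")]
    public static extern int Increment(int value);
}
```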

We’ll take a look at the generated wrapper methods for some extern method declarations next.

Marshaling a blittable type

The simplest kind of extern wrapper only deals with blittable types.

private extern static int Increment(int value);

In the Bulk_Assembly-CSharp_0.cpp file, search for the string “HelloWorld_Increment_m3”. The wrapper function for the Increment method looks like this:

extern "C" {int32_t DEFAULT_CALL Increment(int32_t);}
extern "C" int32_t HelloWorld_Increment_m3 (Object_t * __this /* static, unused */, int32_t ___value, const MethodInfo* method)
{
  typedef int32_t (DEFAULT_CALL *PInvokeFunc) (int32_t);
  static PInvokeFunc _il2cpp_pinvoke_func;
  if (!_il2cpp_pinvoke_func)
  {
    _il2cpp_pinvoke_func = (PInvokeFunc)Increment;
    if (_il2cpp_pinvoke_func == NULL)
    {
      il2cpp_codegen_raise_exception(il2cpp_codegen_get_not_supported_exception("Unable to find method for p/invoke: 'Increment'"));
    }
  }

  int32_t _return_value = _il2cpp_pinvoke_func(___value);

  return _return_value;
}

First, note the typedef for the native function signature:

typedef int32_t (DEFAULT_CALL *PInvokeFunc) (int32_t);

Something similar will show up in each of the wrapper functions. This native function accepts a single int32_t and returns an int32_t.

Next, the wrapper finds the proper function pointer and stores it in a static variable:

_il2cpp_pinvoke_func = (PInvokeFunc)Increment;

Here the Increment function actually comes from an extern statement (in the C++ code):

extern "C" {int32_t DEFAULT_CALL Increment(int32_t);}

On iOS, native methods are statically linked into a single binary (indicated by the “__Internal” string in the DllImport attribute), so the IL2CPP runtime does nothing to look up the function pointer. Instead, this extern statement informs the linker to find the proper function at link time. On other platforms, the IL2CPP runtime may perform a lookup (if necessary) using a platform-specific API method to obtain this function pointer.

Practically, this means that on iOS, an incorrect p/invoke signature in managed code will show up as a linker error in the generated code rather than at runtime. So all p/invoke signatures need to be correct, even if they are not used at runtime.

Finally, the native method is called via the function pointer, and the return value is returned. Notice that the argument is passed to the native function by value, so any changes to its value in the native code will not be available in the managed code, as we would expect.

Marshaling a non-blittable type

Things get a little more exciting with a non-blittable type, like string. Recall from an earlier post that strings in IL2CPP are represented as an array of two-byte characters encoded via UTF-16, prefixed by a 4-byte length value. This representation does not match either the char* or wchar_t* representations of strings in C on iOS, so we have to do some conversion. If we look at the StringsMatch method (HelloWorld_StringsMatch_m4 in the generated code):

[return: MarshalAs(UnmanagedType.U1)]
private extern static bool StringsMatch([MarshalAs(UnmanagedType.LPStr)]string l, [MarshalAs(UnmanagedType.LPStr)]string r);

we can see that each string argument will be converted to a char* (due to the UnmanagedType.LPStr directive).

typedef uint8_t (DEFAULT_CALL *PInvokeFunc) (char*, char*);

The conversion looks like this (for the first argument):

char* ____l_marshaled = { 0 };
____l_marshaled = il2cpp_codegen_marshal_string(___l);

A new char buffer of the proper length is allocated, and the contents of the string are copied into the new buffer. Of course, after the native method is called we need to clean up those allocated buffers:

il2cpp_codegen_marshal_free(____l_marshaled);
____l_marshaled = NULL;

So marshaling a non-blittable type like string can be costly.

Marshaling a user-defined type

Simple types like int and string are nice, but what about a more complex, user defined type? Suppose we want to marshal the Vector structure above, which contains three float values. It turns out that a user defined type is blittable if and only if all of its fields are blittable. So we can call ComputeLength (HelloWorld_ComputeLength_m5 in the generated code) without any need to convert the argument:

typedef float (DEFAULT_CALL *PInvokeFunc) (Vector_t1 );

// I’ve omitted the function pointer code.

float _return_value = _il2cpp_pinvoke_func(___v);
return _return_value;

Notice that the argument is passed by value, just as it was for the initial example when the argument type was int. If we want to modify the instance of Vector and see those changes in managed code, we need to pass it by reference, as in the SetX method (HelloWorld_SetX_m6):

typedef float (DEFAULT_CALL *PInvokeFunc) (Vector_t1 *, float);

Vector_t1 * ____v_marshaled = { 0 };
Vector_t1  ____v_marshaled_dereferenced = { 0 };
____v_marshaled_dereferenced = *___v;
____v_marshaled = &____v_marshaled_dereferenced;

float _return_value = _il2cpp_pinvoke_func(____v_marshaled, ___value);

Vector_t1  ____v_result_dereferenced = { 0 };
Vector_t1 * ____v_result = &____v_result_dereferenced;
*____v_result = *____v_marshaled;
*___v = *____v_result;

return _return_value;

Here the Vector argument is passed as a pointer to native code. The generated code goes through a bit of a rigmarole, but it is basically creating a local variable of the same type, copying the value of the argument to the local, then calling the native method with a pointer to that local variable. After the native function returns, the value in the local variable is copied back into the argument, and that value is then available in the managed code.

Marshaling a non-blittable user defined type

A non-blittable user defined type, like the Boss type defined above can also be marshaled, but with a little more work. Each field of this type must be marshaled to its native representation. Also, the generated C++ code needs a representation of the managed type that matches the representation in the native code.

Let’s take a look at the IsBossDead extern declaration:

[return: MarshalAs(UnmanagedType.U1)]
private extern static bool IsBossDead(Boss b);

The wrapper for this method is named HelloWorld_IsBossDead_m7:

extern "C" bool HelloWorld_IsBossDead_m7 (Object_t * __this /* static, unused */, Boss_t2  ___b, const MethodInfo* method)
{
  typedef uint8_t (DEFAULT_CALL *PInvokeFunc) (Boss_t2_marshaled);

  Boss_t2_marshaled ____b_marshaled = { 0 };
  Boss_t2_marshal(___b, ____b_marshaled);
  uint8_t _return_value = _il2cpp_pinvoke_func(____b_marshaled);

  return _return_value;
}

The argument is passed to the wrapper function as type Boss_t2, which is the generated type for the Boss struct. Notice that it is passed to the native function with a different type: Boss_t2_marshaled. If we jump to the definition of this type, we can see that it matches the definition of the Boss struct in our C++ static library code:

struct Boss_t2_marshaled
{
  char* ___name_0;
  int32_t ___health_1;
};

We again used the UnmanagedType.LPStr directive in C# to indicate that the string field should be marshaled as a char*. If you find yourself debugging a problem with a non-blittable user-defined type, it is very helpful to look at this _marshaled struct in the generated code. If the field layout does not match the native side, then a marshaling directive in managed code might be incorrect.
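For reference, a managed declaration of Boss that produces this marshaled layout looks roughly like the following; this is a sketch consistent with the calls in HelloWorld.cs, not the exact source:

```csharp
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct Boss
{
    // Marshaled as char* on the native side, matching the
    // ___name_0 field of the generated Boss_t2_marshaled struct.
    [MarshalAs(UnmanagedType.LPStr)]
    public string Name;

    public int Health;

    public Boss(string name, int health)
    {
        Name = name;
        Health = health;
    }
}
```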

The Boss_t2_marshal function is a generated function which marshals each field, and the Boss_t2_marshal_cleanup frees any memory allocated during that marshaling process.

Marshaling an array

Finally, we will explore how arrays of blittable and non-blittable types are marshaled. The SumArrayElements method is passed an array of integers:

private extern static int SumArrayElements(int[] elements, int size);

This array is marshaled, but since the element type of the array (int) is blittable, the cost to marshal it is very small:

int32_t* ____elements_marshaled = { 0 };
____elements_marshaled = il2cpp_codegen_marshal_array<int32_t>((Il2CppCodeGenArray*)___elements);

The il2cpp_codegen_marshal_array function simply returns a pointer to the existing managed array memory, that’s it!
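Conceptually, this is similar to pinning the array in C# and handing native code a pointer to its first element; here is a sketch of that idea (not what IL2CPP literally emits):

```csharp
using System;
using System.Runtime.InteropServices;

int[] values = { 1, 2, 3, 4 };

// Pin the array so the GC cannot move it, then take the address
// of element 0; no per-element copying is needed for blittable types.
GCHandle handle = GCHandle.Alloc(values, GCHandleType.Pinned);
try
{
    IntPtr elements = handle.AddrOfPinnedObject();
    // 'elements' can now be passed to native code expecting an int*.
}
finally
{
    handle.Free();
}
```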

However, marshaling an array of non-blittable types is much more expensive. The SumBossHealth method passes an array of Boss instances:

private extern static int SumBossHealth(Boss[] bosses, int size);

Its wrapper has to allocate a new array, then marshal each element individually:

Boss_t2_marshaled* ____bosses_marshaled = { 0 };
size_t ____bosses_Length = 0;
if (___bosses != NULL)
{
  ____bosses_Length = ((Il2CppCodeGenArray*)___bosses)->max_length;
  ____bosses_marshaled = il2cpp_codegen_marshal_allocate_array<Boss_t2_marshaled>(____bosses_Length);
}

for (int i = 0; i < ____bosses_Length; i++)
{
  Boss_t2  const& item = *reinterpret_cast<Boss_t2 *>(SZArrayLdElema((Il2CppCodeGenArray*)___bosses, i));
  Boss_t2_marshal(item, (____bosses_marshaled)[i]);
}

Of course all of these allocations are cleaned up after the native method call is completed as well.


The IL2CPP scripting backend supports the same marshaling behaviors as the Mono scripting backend. Because IL2CPP produces generated wrappers for extern methods and types, it is possible to see the cost of managed to native interop calls. For blittable types, this cost is often not too bad, but non-blittable types can quickly make interop very expensive. As usual, we’ve just scratched the surface of marshaling in this post. Please explore the generated code more to see how marshaling is done for return values and out parameters, native function pointers and managed delegates, and user-defined reference types.

Next time we will explore how IL2CPP integrates with the garbage collector.

Xamarin Test Cloud Now Available to All Xamarin Developers

We started Xamarin because we want to help developers build apps they can be proud of and provide you with the tools you need to ensure that your apps do what they were designed to do.

Your mobile app strongly shapes how users perceive you and your business. A crash, a hang, or broken functionality leads to low app ratings or even user abandonment. Yet most developers aren’t testing their mobile apps systematically, because current tools and services are too hard to use.

That’s why we’re happy to announce that as of today, all Xamarin Platform subscriptions include 60 Xamarin Test Cloud device minutes per month. Every Xamarin developer can immediately take advantage of this new benefit to start automating UI testing for mobile apps written on any platform on our industry-leading catalog of over 1,600 real iOS and Android smartphones and tablets.

How it Works

Starting a new app in Xamarin Studio creates a C# Xamarin.UITest project with a basic test to make sure your app loads and to give you a starting point for writing additional tests. You can upload your tests directly from Xamarin Studio and Visual Studio, incorporating mobile testing directly into your development processes to quickly verify apps work on a variety of hardware.

Executing a test is now as easy as a build or debug operation in Xamarin Studio or Visual Studio, and your Xamarin.UITest project can reside in the same solution as your Xamarin app, making it easy to keep your app code and tests in sync.
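The generated starting point is an NUnit-based test that looks roughly like this (details vary by template version):

```csharp
using NUnit.Framework;
using Xamarin.UITest;

[TestFixture]
public class Tests
{
    IApp app;

    [SetUp]
    public void BeforeEachTest()
    {
        // Configure and launch the app under test (Android shown here).
        app = ConfigureApp.Android.StartApp();
    }

    [Test]
    public void AppLaunches()
    {
        // The template's basic test: verify the app starts,
        // and capture a screenshot of the first screen.
        app.Screenshot("First screen.");
    }
}
```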

Upload C# Xamarin.UITest mobile tests directly from Xamarin Studio and Visual Studio



You specify the devices you want to test when initiating a test run, either by market share data we maintain for multiple geographies, or by the extensive filtering available for device type, manufacturer, OS version, processor, and other factors.


Select which devices to test in Xamarin Test Cloud


Xamarin Test Cloud makes it easy to quickly find visual inconsistencies by comparing results across dozens of devices at a time after your test has executed, displaying screenshots synchronized to the same test steps. The service also offers video recordings of tests, making it easier to review the overall user experience on a variety of devices.


Xamarin Test Cloud test results


We’ve been using Xamarin Test Cloud internally on our sample apps to make sure they run properly and look great. One of these test runs uncovered that the screen rendered correctly on an iPhone 6, but had unexpected whitespace on an iPad. Bugs like these wouldn’t have been found without Xamarin Test Cloud, which enabled us to quickly diagnose the issue and correct the code.


What’s a Device Minute?

Device minutes are only consumed when the test runs on an actual device, regardless of whether you’re running tests serially on one device, or parallelized for faster results. If you need more minutes, you can sign up for one of our Xamarin Test Cloud plans.

Get Started

To start using your Xamarin Test Cloud device time today, visit testcloud.xamarin.com. Our online documentation also has great guides on how to write mobile tests in C# to test any app, whether or not it’s built with Xamarin.

We’ll be doing a live webinar on Wednesday, July 8 at 8:30 am PT covering different mobile testing methodologies and providing an in-depth overview of how Xamarin Test Cloud works. We’ll also discuss the exciting plans in our product roadmap. Click here to register.

Keep building great apps!

The post Xamarin Test Cloud Now Available to All Xamarin Developers appeared first on Xamarin Blog.

July 1

iOS 9 Preview Now Available

We’re excited to announce that we have just published our first iOS 9 API preview release in Xamarin.iOS. This preview includes support for an array of exciting new features that you can start developing with today, including CoreSpotlight, Model I/O, and ReplayKit as well as improvements to Contacts APIs.

Installing the iOS 9 Preview

You can download the iOS 9 Preview for Xamarin Studio directly from the Xamarin Developer site. Upcoming previews will add even more support for iOS 9, including support for Visual Studio.

Important: Service Release for iOS 9 Compatibility

In addition to our iOS 9 preview release, we have also published an iOS release to our update channels addressing two issues that cause many Xamarin apps to crash on startup on Apple’s OS preview for iOS 9.

At WWDC last month, Apple announced that a public preview of iOS 9 will be made available to iOS users this July. To ensure your published apps run smoothly on Apple’s iOS 9 public preview this month, we recommend that Xamarin.iOS developers download the latest release from our Stable updater channel and rebuild and resubmit their apps to the App Store using Xcode 6. This will enable your apps to run on the iOS 9 OS preview and ensure your apps are ready for the public release of iOS 9 this fall.

You can read more about these updates in our iOS 9 Compatibility Guide.

The post iOS 9 Preview Now Available appeared first on Xamarin Blog.

The State of Unity on Linux

Hello lovely people!

Last week at Unite Europe, the Unity roadmap was made public, and it included a highly-voted feature on our feedback site: a Linux port of the Unity editor.  This past weekend I wrote a post on my personal blog with my own thoughts on our experience porting the Unity editor to Linux.  It turned out to be a pretty popular post, and it was amazing to see so many positive comments and reactions from our community, so we thought it would be nice to do something a bit more ‘official’ on the company blog and explain what you’ll be able to expect from our Linux port.

Unity was originally written for Mac OS X, and the Windows port came along in 2009 with the release of Unity 2.5.  Porting Unity from Mac to Windows was already a lot of work, and as you can imagine, Unity has grown considerably in size and complexity since 2009.  So porting to a third platform has been a lot of (very fun) work and taken a lot of time.

There are some of us who have been working on the Linux port of the editor since the beginning (which started in 2011 at an early ‘Ninja Camp’, according to our version control history), but several different people at Unity have helped work on one aspect or another along the way (lately it has been Levi spending the most time on the project, with myself and others, helping whenever/however possible, so buy him a beer if you see him).  Like I mentioned in my personal blog post, a lot of focus during this time has been on dealing with case-sensitivity issues (NTFS is case-insensitive, as is HFS+ by default; Unity doesn’t work on a case-sensitive system — sorry about that!) and native window management / input handling.  But we’re getting there!

What We Expect it Will Do

  • Run on 64-bit Linux (just like with our player, the ‘official’ support will be for Ubuntu due to its market share, and just like with our player, it should run on most modern Linux distributions); the earliest version of Ubuntu supported will be 12.04 (which is what our build/test farm is running).
  • Export to all of the same platforms as the Mac OS X editor (except for iOS; maybe someday we’ll enable exporting to iOS the same way we do from the Windows editor, but not initially)
  • Import all asset types not dependent on non-portable 3rd-party middleware
  • Support global illumination, occlusion culling, and all other systems reliant on portable 3rd-party middleware


  • It will require modern, vendor-provided graphics drivers
  • Some of the model importers that rely on external applications (e.g., 3ds Max and SketchUp) won’t work; the workaround is to export to FBX instead

The Plan Right Now: An Experimental Build

The Linux port of Unity currently lives in an internally ‘forked’ repo.  Our plan is currently to prepare an early experimental build for you from this fork (that is kept more or less in sync with Unity’s mainline development branch) that you will be able to try out.  Based on how that experiment goes, we’ll figure out if it’s something we can sustain as an official port alongside our Mac and Windows editors (the Linux runtime support was also released as a preview initially, due to concerns about support and the fragmentation of Linux distributions, and the support burden turned out to be very low, despite a very significant percentage of Linux games on Steam being made with Unity, so I’m hopeful; we’ll have to see how it goes).

It’s been a really long time and I couldn’t be more excited.  Levi, myself, and all of the other people who have helped with the Linux port over the years (the list is pretty long!) can’t wait to get it into your hands.

P.S. Here are some more teaser screenshots:

[Screenshots: The Blacksmith demo, bridge scenes, and the Unity Editor running on Linux]

P.P.S. We’re really interested in hearing how you will use the Linux Editor: what platforms you will be exporting to, whether you’re interested specifically in doing regular development on Linux or mostly interested in automated build pipelines, etc.

Much love from Unity,

Na’Tosha (@natosha_bard)

Towards Semantic Version Control

The new release we’re announcing today, BL677, includes a feature that pretty much explains what our vision for the future is: semantic version control.

It may sound like big words, but it is a pretty simple concept: the version control system “understands” the code. So when you diff C#, Java, C, or VB.NET code (and hopefully all languages in the near future), it knows how to handle it.

Build Time Code Generation in MSBuild

Build-time code generation is a really powerful way to automate repetitive parts of your code. It can save time, reduce frustration, and eliminate a source of copy/paste bugs.

This is something I'm familiar with due to my past work on MonoDevelop's tooling for ASP.NET, T4 and Moonlight, and designing and/or implementing similar systems for Xamarin.iOS and Xamarin.Android. However, I haven't seen any good documentation on it, so I decided to write an article to outline the basics.

This isn't just something for custom project types, it's also something that you can include in NuGets, since they can include MSBuild logic.


The basic idea is to generate C# code from other files in the project and include it in the build. This can be used to generate helpers, for example CodeBehind for views (ASPX, XAML), to process simple DSLs (T4), or for any other purpose you can imagine.

MSBuild makes this pretty easy. You can simply hook a custom target before the Compile target, and have it emit a Compile item based on whatever input items you want. For the purposes of this guide I'm going to assume you're comfortable with enough MSBuild to understand that - if you're not, the MSDN docs are pretty good for the basics.

The challenge is to include the generated C# in code completion, and update it automatically.

An IDE plugin can do this fairly easily - see for example the Generator mechanism used by T4, and the *.designer.cs file generated by the old Windows Forms and ASP.NET designers. However, doing it this way has several downsides, for example you have to check their output into source control, and they won't update if you edit files outside the IDE. Build-time generation, as used for XAML, is a better option in most cases.

This article describes how to implement the same model used by WPF/Silverlight/Xamarin.Forms XAML.

Generating the Code

First, you need a build target that updates the generated files, emits them into the intermediate output directory, and injects them into the Compile ItemGroup. For the purposes of this article I'll call it UpdateGeneratedFiles and assume that it's processing ResourceFile items and emitting a file called GeneratedFile.g.cs. In a real implementation, you should use unique names that won't conflict with other targets, items, and files.

For example (the GenerateCodeFromResources task here is a placeholder for whatever code-generation task you implement):

<Target Name="UpdateGeneratedFiles"
        DependsOnTargets="_UpdateGeneratedFiles"
        Condition="'@(ResourceFile)' != ''">
  <ItemGroup>
    <Compile Include="$(IntermediateOutputPath)GeneratedFile.g.cs" />
    <FileWrites Include="$(IntermediateOutputPath)GeneratedFile.g.cs" />
  </ItemGroup>
</Target>

<Target Name="_UpdateGeneratedFiles"
        Inputs="$(MSBuildProjectFullPath);@(ResourceFile)"
        Outputs="$(IntermediateOutputPath)GeneratedFile.g.cs">
  <GenerateCodeFromResources
      Sources="@(ResourceFile)"
      OutputFile="$(IntermediateOutputPath)GeneratedFile.g.cs" />
</Target>

A quick breakdown:

The UpdateGeneratedFiles target runs if you have any ResourceFile items. It injects the generated file into the build as a Compile item, and also injects a FileWrites item so the file is recorded for incremental clean. It depends on the 'real' generation target, _UpdateGeneratedFiles, so that the file is generated before the UpdateGeneratedFiles target runs.

The _UpdateGeneratedFiles target has Inputs and Outputs set, so that it is incremental. The target will be skipped if the output file exists and is newer than all of the input files - the project file and the resource files.

The project file is included in the inputs list because its write time will change if the list of resource files changes.

The _UpdateGeneratedFiles target simply runs a task that generates the output file from the input files.

Note that the generated file has the suffix .g.cs. This is the convention for build-time generated files. The .designer.cs suffix is used for user-visible files generated at design-time by the designer.

Hooking into the Build

The UpdateGeneratedFiles target is added to the dependencies of the CoreCompile target by prepending it to the CoreCompileDependsOn property.
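That hook takes only a property definition in your targets file (a minimal sketch, using the UpdateGeneratedFiles target from the earlier example):

```xml
<PropertyGroup>
  <!-- Prepend our target so it runs before the compiler is invoked -->
  <CoreCompileDependsOn>UpdateGeneratedFiles;$(CoreCompileDependsOn)</CoreCompileDependsOn>
</PropertyGroup>
```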


This means that whenever the project is compiled, the generated file is created or updated as necessary, and the injected Compile item is passed to the compiler - even though it never appears in the project file itself.

Live Update on Project Change

So how do the types from the generated file show up in code completion before the project has been compiled? This takes advantage of the way that Visual Studio initializes its in-process compiler that's used for code completion.

When the project is loaded in Visual Studio, or when the project file is changed, Visual Studio runs the CoreCompile target. It intercepts the call to the compiler via a host hook in the MSBuild Csc task and uses the file list and arguments to initialize the in-process compiler.

Because UpdateGeneratedFiles is a dependency of CoreCompile, this means that the generated file is updated before the code completion system is initialized, and the injected file is passed to the code completion system.

Note that the UpdateGeneratedFiles target has to be fast, or it will add latency to code completion availability when first loading the project or after cleaning it.

Live Update on File Change

So, the generated code is updated whenever the project changes. But what happens when the contents of the ResourceFile files that it depends on change?

This is handled via Generator metadata on each of the ResourceFile files:

  <ResourceFile Include="Foo.png">
    <Generator>MSBuild:UpdateGeneratedFiles</Generator>
  </ResourceFile>

This takes advantage of another Visual Studio feature. Whenever the file is saved, VS runs the UpdateGeneratedFiles target. The code completion system detects the change to the generated file and reparses it.

This metadata has to be applied to the items by the IDE (or the user). It may be possible for the build targets to apply it automatically using an ItemDefinitionGroup but I haven't tested whether VS respects this for Generator metadata.

Xamarin Studio/MonoDevelop

But we have another problem. What about Xamarin Studio/MonoDevelop?

Although Xamarin Studio respects Generator metadata, it doesn't have an in-process compiler. It doesn't run CoreCompile, nor does it intercept the Csc file list, so its code completion system won't see the generated file at all.

The solution - for now - is to add explicit support in a Xamarin Studio addin to run the UpdateGeneratedFiles target on project load and when the resource files change, parse the generated file and inject it into the type system directly.

Migration


Migrating automatically from a designer-generation system to a build-generation system has a few implications.

You either have to force migration of the project to the new system via an IDE, or handle the old system and make the migration optional - e.g. toggled by the presence of the old files. You have to update the project templates and samples, and you have to build a migration system that removes the designer files from the project and adds Generator metadata to existing files.

June 30

Summer Fun with Xamarin Events in July

It’s July already! The year is half over, so there’s no better time to get out and meet fellow Xamarin developers in your area and learn about crafting beautiful, cross-platform native mobile apps in C# at one of the local conferences, workshops, or user group events happening around the world!



Here are a few events happening this month:

Geek-a-Palooza!

  • Sant Julià de Lòria, Andorra: July 4th
  • Hands on Lab: Xamarin & Push Notifications

Mobile & Cloud Hack Day

  • Hauer, Boqueirão Curitiba, Brazil: July 4th
  • A FREE event on how to create cross-platform apps for Android, iOS, and Windows using C#, Xamarin, and Visual Studio

Seattle Mobile .NET Developers

  • Seattle, WA: July 7th
  • Introduction to Xamarin.Forms: iOS, Android, and Windows in C# & XAML with Xamarin Evangelist James Montemagno

Montreal Mobile Developers

  • Montreal, Canada: July 8th
  • Azure Mobile Services & Mobile Apps with Xamarin

XLSOFT Japan

  • Tokyo, Japan: July 8th
  • Windows Phone / iOS / Android Cross-Platform App Development Using Xamarin.Forms

Birmingham Xamarin Mobile Cross-Platform User Group

  • Birmingham, UK: July 8th
  • Developing for iBeacons with Xamarin

Introductory Training Session to Xamarin

  • Hanover, Germany: July 13th
  • Xamarin Workshops by H&D International

DC Mobile .NET Developers Group

  • Washington, DC: July 14th
  • NEW GROUP! Getting Started with Xamarin.Forms by Xamarin MVP, Ed Snider

Sydney Mobile .NET Developers

New York Mobile .NET Developers

  • New York, NY: July 28th
  • Building Native Cross-Platform Apps in C# with Xamarin by Xamarin MVP, Greg Shackles


Even more Xamarin events, meetups, and presentations are happening this month! Check out the Xamarin Events Forum to find an event near you if you don’t see an event in your area in the list above.

Interested in getting a developer group started? We’re here to help! Here’s a tips and tricks guide on starting a developer group, an introduction to Xamarin slide deck, and of course our community sponsorship program to get you started. We also love to hear from you, so feel free to send us an email or tweet @XamarinEvents to help spread the word and continue to grow the Xamarin community.

The post Summer Fun with Xamarin Events in July appeared first on Xamarin Blog.

New version of Unity Answers unveiled today

Update: We’ve hit some road bumps and have a delay in deploying the new theme with the features listed in this post. The site is currently live so you can still access it while we work on getting the theme set up.

We have been working on improving Unity Answers with the goal of making it easier to uncover authentic questions that need to be answered. We also want to cut down the time it takes to get an answer, regardless of the number of posts published daily. Ultimately, we want to make it easier for you to find existing posts that provide solutions to issues similar to those you’re experiencing.

A user guide will be provided once the new site is deployed describing the new features and how to navigate the site.

Help Room

We have created a new section called the Help Room, where any user regardless of reputation (amount of karma points) can post questions directly without having to wait for moderator approval. The Help Room will also contain posts where users are asking for more general help with scripting. Moderators will be able to move questions from the default Questions/Home section to the Help Room when needed. If you want to post to the default Questions/Home section, a reputation of 15 KP or more is needed, otherwise your post will have to go through the moderation queue.


Other features that are being introduced with this version:

  • When you start typing a Question, a list with suggested existing threads (+ amount of answers) will appear that may already hold the answer you are looking for
  • Autosave while typing questions, answers and comments
  • A new redactor text editor to make it easier to clearly see how to format code and insert screenshots
  • Reward other users with your own karma points for contributing with good answers and asking good questions
  • See how many followers a post has and tag a user who may be able to answer it
  • Follow content and manage them from your user profile
  • User profile and moderation tools will be accessible from the top right corner

Moderators! You will also get new features:

A space for Moderators has been created to keep track of mod or site-related questions rather than using [META] threads. You will be able to move questions from the default Questions/Home section to either the Help Room or Moderators space.

You will also be able to redirect posts. If you want to redirect a post to another, simply choose that option from the drop-down menu and search for the post it should redirect to. This way, if you find a duplicate post, for example, you can just redirect it to a more suitable one.


We hope you will enjoy the new site, and if there are any questions, make sure to post them in the forum thread created for this so we can help you out.

June 29

What’s New in Google Play services

There are a plethora of amazing APIs built right into the core operating system to take advantage of when developing for Android, and Google Play services (GPS) allows for the addition of even more unique experiences.

What is Google Play services?

GPS is a continuously updated library from Google that enables adding new features to Android apps without waiting for a new operating system release. One of the most well-known features of GPS is Google Maps, which allows developers to add rich, interactive maps to their apps. Since GPS is a separate library that’s updated regularly, there are always new APIs to explore. The most recent release, 7.5, has tons of great new features.

Getting Started with Google Play services

In previous releases of GPS, everything was bundled into one huge NuGet and Component Library to be added to an app. However, this has now changed and each API has been broken into its own unique NuGet package, so you can pick and choose which features to add to an app without increasing app size or worrying about linking. To get started with GPS, simply open the NuGet Package Manager and search for “Xamarin Google Play services”. A list of new packages available for Android apps will be displayed, and you can choose to install the “All” package or select only the ones you want.


To learn more about the changes to the GPS packages and APIs, Xamarin Software Engineer Jon Dick’s blog post on the topic is a great place to start.

Once you have GPS installed, you can take advantage of tons of new APIs for your Android app, including some that I’m particularly excited about, outlined below.

Google Fit

Developers of fitness apps will be excited to see that Google Fit has been updated with a brand new Recording and History API that enables gathering estimated distance traveled and calories burned with a simple API. This is in addition to the other APIs already available to discover sensors, collect activity data, and track a user’s fitness.

Android Wear Maps

Until now, there wasn’t a good way to show users their current location on a map on their Android Wear devices. The latest release, however, brings the entire Maps API to Android Wear, including support for interactive maps and non-interactive maps in Lite mode.

Google Cloud Messaging Updates

One of my favorite features of GPS has to be Google Cloud Messaging (GCM) for sending push notifications to Android devices, and there have been several updates to GCM in Google Play services 7.5. The new Instance ID tokens enable a single identity for your app across its entire lifetime instead of having a unique registration ID for each device. This simplifies the process of sending push notifications to all of the devices on which an app is installed.

So Much More

These aren’t the only additions to GPS in this release. Several new APIs have been added, including App Invites, Smart Lock for Passwords, and updates to Google Cast. The full list can be found in the Google Play services documentation.

The post What’s New in Google Play services appeared first on Xamarin Blog.

June 27

Reader Q&A – PDFs in iOS

I got a question from a reader last night who was looking at some code from one of my Xamarin seminars.

Ryan asked about how to extract the content from a pdf file, draw on it, and email it in iOS.

One way to do this is using Core Graphics, as shown in the following snippet:
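A minimal sketch of that approach (assuming Xamarin.iOS; the PdfStamper class and the overlay drawing are illustrative, not the seminar's actual code) opens the PDF with CGPDFDocument, redraws each page into a new PDF context, draws on top of it, and captures the result as NSData that can be attached to an MFMailComposeViewController:

```csharp
using CoreGraphics;
using Foundation;

// Hypothetical helper: re-renders each page of an existing PDF into a
// new PDF context, stamps an overlay on top, and returns the result
// as NSData suitable for emailing as an attachment.
public static class PdfStamper
{
    public static NSData Stamp (string pdfPath)
    {
        var output = new NSMutableData ();
        using (var doc = CGPDFDocument.FromFile (pdfPath)) {
            // Use the first page's media box as the page size
            var mediaBox = doc.GetPage (1).GetBoxRect (CGPDFBox.Media);
            using (var consumer = new CGDataConsumer (output))
            using (var ctx = new CGContextPDF (consumer, mediaBox)) {
                for (nint i = 1; i <= doc.Pages; i++) {
                    ctx.BeginPage (mediaBox);
                    // Draw the original page content first...
                    ctx.DrawPDFPage (doc.GetPage (i));
                    // ...then draw on top of it (here, a translucent red box)
                    ctx.SetFillColor (new CGColor (1, 0, 0, 0.5f));
                    ctx.FillRect (new CGRect (20, 20, 200, 40));
                    ctx.EndPage ();
                }
                ctx.Close ();
            }
        }
        return output;
    }
}
```

The returned NSData can then be passed to MFMailComposeViewController.AddAttachmentData with the "application/pdf" MIME type.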

If you have a question feel free to contact me through my blog. I get lots of questions like this, but I do my best to respond to them all.

June 26

Build and Debug C++ Libraries in Xamarin.Android Apps with Visual Studio 2015

Today, the Microsoft Hyperlapse team shared the story of how they developed their app with C++ and Xamarin. Microsoft Hyperlapse Mobile turns any long video into a short, optimized version that you can easily share with everyone. It can transform a bumpy bike ride video into a smooth and steady time-lapse, like this one from Nat Friedman that was shot using GoPro and processed with Microsoft Hyperlapse Pro.

The core algorithmic portions of Hyperlapse are written in Visual C++ Cross-Platform Mobile and the majority of the app business logic was retained in a .NET portable class library. Using Xamarin, the Hyperlapse team was able to leverage the core C++ code and app logic, while providing C#-based native UIs so users of the app feel at home on each platform. Leveraging C++ code in your Xamarin app is easy, as outlined in the below explanation on implementing C++ in your Xamarin.Android apps.

Using Native Libraries

Xamarin already supports the use of pre-compiled native libraries via the standard PInvoke mechanism. To deploy a native library with a Xamarin.Android project, add the binary to the project and set its Build Action to AndroidNativeLibrary. You can read Using Native Libraries for more details. This approach is best if you have pre-compiled native libraries that support any or all of the target architectures (armeabi, armeabi-v7a, and x86). The Mono San Angeles sample port explains how the libsanangeles.so dynamic library and its native methods are accessed in Xamarin.Android.

In this approach, dynamic libraries are typically developed in another IDE, and that code is not accessible for debugging. This imposes difficulty on developers, as it becomes necessary to context switch between code bases for debugging and fixing issues. With Visual Studio 2015, this is no longer the case. Through our collaboration with the Visual C++ Cross-Platform Mobile team at Microsoft, Xamarin developers in Visual Studio now have the power to write, compile, debug, and reference C/C++ projects in Xamarin.Android from within their favorite IDE.

Using Visual C++ Cross-Platform Mobile

As stated above, Visual Studio 2015 supports the development of C/C++ projects that can be targeted to Android, iOS, and Windows platforms. Be sure to select Xamarin and Visual C++ for Cross-Platform Mobile Development during installation.

Visual C++ for Cross Platform Mobile Development

For this post, we’re using the same San Angeles port sample referenced earlier in the Using Native Libraries section. However, its original C++ code is ported to a Dynamic Shared Library (Android) project in Visual Studio. When creating a new project, the Dynamic Shared Library template can be found under Visual C++ → Cross-Platform.

Mono San Angeles Demo

San Angeles is an OpenGL ES port of a small, self-running demonstration called “San Angeles Observation.” This demo features a scenic run-through of a futuristic city with different buildings and items. The original version was made for desktop with OpenGL, and the current version is one of Google’s NDK samples optimized for Android. The source code is available here, ported to Visual Studio.

Now that the Dynamic Shared Library that contains the source code has been directly referenced from the Xamarin.Android project, it works as smoothly as any other supported project reference.

Visual Studio 2015 VC++ Cross-Platform Mobile

To interop with native libraries in your Xamarin.Android project, all you need to do is create a DllImport function declaration for the existing code to invoke, and the runtime will handle the rest. Set the EntryPoint to specify the exact function to be called in the native code.

[DllImport ("sanangeles", EntryPoint = "Java_com_example_SanAngeles_DemoGLSurfaceView_nativePause")]
static extern void nativePause (IntPtr jnienv);

Now, to call the native function, simply call the defined method.

public override bool OnTouchEvent (MotionEvent evt)
{
	if (evt.Action == MotionEventActions.Down) {
		nativePause (IntPtr.Zero);
	}
	return true;
}

Refer to Interop with Native Libraries to learn more about interoperating with native methods.

One More Thing…

Now that you have access to the native source code, it’s possible to debug the C/C++ code inside Visual Studio. To debug your C/C++ files, choose to use the Microsoft debugger engine under the Android Options of Project properties.

VC++ Native Debugging options

Enable a breakpoint inside your C++ project, hit F5, and watch the magic happen!

Learn More

Refer to the VC++ team’s blog post at MSDN for a step-by-step guide to building native Android apps in Visual Studio 2015. The source code for the Mono San Angeles port explained in this post is available for download in our samples.

Discuss this post in the Xamarin Forums.

The post Build and Debug C++ Libraries in Xamarin.Android Apps with Visual Studio 2015 appeared first on Xamarin Blog.

Leveraging Unity Cloud Build for testing large projects

This is a story about how we are using Unity Cloud Build internally and how it can make life easier for you, our users, as well. Read on to learn how we used to deal with large project testing and which awesome new possibilities are available now!

Once upon a time

During development of the massive Unity 5 release, and the extensive changes it entailed, we frequently ran into issues with importing and building projects. For Unity 5 we wanted a cleaner API, which in some cases meant we had to break backwards compatibility. Often, when importing an older project in Unity 5, we had to fix scripts manually. We were also hitting major bugs and regressions in graphics, physics and performance related areas.

Our testers do a very good job at making sure that projects import, build and run properly in every new build on all our supported platforms, but since we are constrained by time we usually used small projects (like AngryBots or Nightmares) to run these tests. These projects don’t cover many of Unity’s features – any of which might break in a new version – and they are nowhere close to the size and complexity of some of the projects developed by our users.

We were fortunate enough to have a few major studios share the full project folders for some of their completed games with us (for example, Republique) and we started manually importing and building these games on every new build during the beta and release candidate phases of development. We found and fixed many issues before any of our users would have been affected by them, but it was a tedious and time-consuming task.

This is how the testing process worked back then:

  1. Install the new Unity build on Windows and OSX.
  2. Open a large project and reimport it. Wait a long time and check back from time to time to see if it has finished (often the Script Updater dialog would require a confirmation prompt which meant even more waiting).
  3. Fix any scripts or other issues and build the game for Mac and Windows Standalone.
  4. Switch platform to Android. Wait a long time again and check to see when it is finished.
  5. Run the game on Android.
  6. Switch platform to iOS. Wait again.
  7. Run the game on iOS.
  8. Repeat steps 2-7 for each of the other large projects we had (7 in total).

This was usually done by one person and it could take a few days.

The glorious present

We quickly started discussing how we could automate some of this. While we were busy figuring that out, in another part of the company work was underway for the official release of what is now Unity Cloud Build. Cloud Build seemed like a perfect fit for automating testing of these large projects.

Fast forward to the present day: we’ve released Unity 5.1, Cloud Build has been out there for a while, and testing large projects goes through an entirely different process (described below for your viewing pleasure):

1. Add project to Cloud Build

2. Press build for all supported platforms (currently Webplayer, Android, and iOS, with standalones and WebGL in the pipeline)


3. Receive an e-mail notification from Cloud Build when everything is finished

4. Share the build link with any/all testers, open the link in the browser, install the app, run and profit!

This saves us a tremendous amount of time, since making a build requires only a few clicks and builds for all projects are done in parallel. As soon as the builds are ready we receive the notification e-mail, and testing can begin on all platforms. If anything fails during importing or building, we also get a notification and can act on it immediately.

Unity Cloud Build can also be configured to automatically poll a repository for changes and rebuild the projects automatically. It can rebuild projects on any supported Unity version.

A brighter future

Since we want to be able to scale up to testing more projects, the limiting factor now is running the builds on devices. The more projects we have in Unity Cloud Build, the more builds we have to install and run on devices.

The biggest problem with testing on mobile is device fragmentation (especially on Android) and we can only test on a few of the most popular models. We would like to know how these builds run on most devices (including some of the more esoteric ones). To that end, we are currently investigating services like TestDroid and AppThwack.

These services give us access to hundreds of devices and we can run our projects on any number of them. They offer a REST API that we can use to feed builds from Unity Cloud Build directly into them. What we get in return is performance data (CPU, Memory, Threading), screenshots of the game while running on the device, the ability to run custom testing scripts, get device logs and more.

By feeding all this data back into our own data warehouse, we can keep track of metrics across Unity versions, projects and devices and quickly pinpoint performance, rendering, and input issues.

Example test run results from our Doll Demo project on TestDroid

Unity Cloud Build and you

Unity Cloud Build is the solution of choice for us when it comes to removing all the time-consuming tasks and bottlenecks involved in importing projects to newer Unity builds, switching platforms and building projects. But we built Unity Cloud Build first and foremost with you, our users, in mind. If you are just getting started on a new Unity project, sign up for the free Cloud Build option and see how easy it is to have us do the heavy lifting for you and share the final build results with your entire team. If you are a veteran Unity user working on one or multiple projects, you will find something suitable for you in one of our other licensing options.

So, what are you waiting for? Give Unity Cloud Build a try! It might just be the best thing that happened to you since we introduced the Cancel Build button!

June 25

MethodHandle Performance

Last time I mentioned that with the integration of OpenJDK 8u45 MethodHandle performance went from awful to unusable. That was pretty literal as the JSR 292 test cases that I regularly run went from taking about 8 minutes to more than 30 minutes (when my patience ran out).

Using sophisticated profiling techniques (pressing Ctrl-Break a few times) I determined that a big part of the problem was MethodHandle.asType(). So I wrote a microbenchmark:

                          IKVM 8.0.5449.1   IKVM 8.1.5638
  asType.permutations(1)             2108            9039
  asType.permutations(2)             2476           17269
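For illustration, the general shape of such a microbenchmark looks like this (a standalone sketch, not IKVM's actual asType.permutations test; the class name and iteration count are made up). It repeatedly uses asType to adapt a direct (int,int)int handle to a boxed (Integer,Integer)Object type, which is exactly the kind of adapter construction that was slow:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class AsTypeBench {
    static int add(int a, int b) { return a + b; }

    // Look up (int,int)int and adapt it to (Integer,Integer)Object,
    // which forces asType to build boxing/unboxing adapters.
    static Object adaptedCall(Integer a, Integer b) throws Throwable {
        MethodHandle mh = MethodHandles.lookup().findStatic(
                AsTypeBench.class, "add",
                MethodType.methodType(int.class, int.class, int.class));
        MethodHandle adapted = mh.asType(
                MethodType.methodType(Object.class, Integer.class, Integer.class));
        return adapted.invokeExact(a, b);
    }

    public static void main(String[] args) throws Throwable {
        long start = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < 100_000; i++) {
            sum += (Integer) adaptedCall(i, 1);
        }
        long ms = (System.nanoTime() - start) / 1_000_000;
        System.out.println("sum=" + sum + ", elapsed=" + ms + " ms");
    }
}
```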

The numbers are times in milliseconds. Clearly not a good trend. I did not investigate deeply what changed in OpenJDK, but after looking at the 8u45 code it was clear that too many intermediate MethodHandles were being created. So I rewrote asType to create a single LambdaForm to do all the work at once. This improved the performance a bit, but the disturbing increase in time for the second iteration was still there. Once again I decided not to investigate the root cause of this, but simply to assume that it was because of anonymous type creation (the CLR has no anonymous types and creating a type is relatively expensive).

Avoiding anonymous type creation turned out to be easy (well, the high level design was easy, the actual implementation took a lot more time). I just had to replace the LambdaForm compiler. There is a single method that represents the exact point where I can come in and change the implementation:

static MemberName generateCustomizedCode(LambdaForm form, MethodType invokerType) { ... }

In OpenJDK this method compiles the LambdaForm into a static method in an anonymous class and returns a MemberName that points to the static method. All I had to do was replace this method with my own implementation that directly generates a .NET DynamicMethod. As I said before, the idea was simple, actually getting the implementation correct took a couple of weeks (part time).

With both these optimizations in place, MethodHandle performance is back to awful (actually, it is less awful than it was before):

                          IKVM 8.0.5449.1   IKVM 8.1.5638   IKVM 8.1.5653
  asType.permutations(1)             2108            9039             314
  asType.permutations(2)             2476           17269             210

The running time of the JSR 292 test cases went down to less than 7 minutes. So I was satisfied. There are many more opportunities to improve the MethodHandle performance on IKVM, but so far no IKVM user has complained about it, so it is not a priority. Note that Java 8 lambdas are not implemented using MethodHandles on IKVM.


Changes:
  • Fixed performance bug. Base type of java.lang.Object was not cached.
  • Untangled TypeWrapper.Finish() from member linking to improve Finish performance for already compiled types.
  • Improved MethodHandle.asType() performance by directly creating a single LambdaForm to do the conversion, instead of creating various intermediate forms (and MethodHandles).
  • Make non-public final methods defined in map.xml that don't override anything automatically non-virtual.
  • Optimized LambdaForm compiler.
  • IKVM.Reflection: Added Type.__GetGenericParameterConstraintCustomModifiers() API.

Binaries available here: ikvmbin-8.1.5653.zip

June 24

Unite Europe 2015 Keynote Wrap up

It’s been a while since we held a Unite in Europe, so we were thrilled to be hosting another event in beautiful Amsterdam. Against the backdrop of the impressive Westergasfabriek, the event brought together over 1,000 attendees from the gaming community, hailing from many different countries, to share ideas, learn best practices, and help each other make ridiculously cool games.

The show kicked off today with the keynote when John Riccitiello took the stage to outline Unity’s three key guiding principles: 1) democratize game development; 2) solve hard problems; and 3) help developers succeed. All of the decisions we’re making and directions we’re heading are based on these three ideas. That means hiring more talent so that we can take more of the ugly work away from you and let you focus on making games, and building new tools, services and initiatives to help you guys find success.

Following an inspiring message from our good friend Rob Pardo, Jussi Laakkonen took the stage to walk us through how Unity Ads is helping developers make money and increase engagement using Seriously Games’ Best Fiends as a successful example of ads done right. Jussi closed his portion of the keynote with the announcement that Unity Ads would be integrated into the Unity engine with the release of 5.2 this fall.


Unity Analytics’ John Cheng then took the stage to demonstrate the upcoming live dashboards displaying the massive amounts of data streaming into our analytics system which can be used to understand the greater market and make smart decisions. Also demonstrated were tools used to understand players and make positive adjustments to games. Heat maps allow users to overlay a visualization in-editor representing player activity while the Funnel Analyzer can show engagement levels and help iron out level design. Using the awesome mobile physics puzzler Ultraflow as an example, John showed how Ultrateam were able to identify and adjust a level to increase player engagement and overall user base.


Best of all, getting analytics hooked up in your game is easy and was made available in 5.1. As demonstrated on stage, it’s as simple as pasting your Cloud Project ID into the editor. You can learn more about Unity Analytics at http://unity3d.com/services/analytics.

As the presentation shifted to technical aspects of the engine and editor, Joachim Ante took the stage to dive a bit farther into the Blacksmith demo. Following the demonstration was the news that many of the assets from The Blacksmith are now available to use on the Asset Store. Additionally The Blacksmith runtime demo is now available for download so that you can check it out on your own computer. For more information on The Blacksmith project, visit the site.

Joachim handed the mic over to Lucas Meijer, who continued the technical portion of the show with a rundown of the recent changes in Unity 5.1, showing off the new Unity networking solution and discussing features that will make VR and AR development within Unity incredible for years to come.

Finally, Lucas also had the pleasure of announcing an exciting bit of news. Unity now has a Public Roadmap! We know it’s something that you’ve wanted for a while and we’re all very happy that you now have it. For more information on that roadmap, we suggest you wander over to our blog post all about it.


Lucas then passed the baton to Thomas Peterson, who discussed the challenges of testing and QA when creating a complex piece of software like Unity. Through our sustained engineering program and impressive suite of automation tools, we’ve taken great steps in a very positive direction. One of the things we’re very proud of is that companies like Intel, ARM, Qualcomm, Sony, and Oculus are all using these tools to ensure you’re all getting the best experience possible on their hardware.

John came back on stage to introduce our awesome guest speaker, Mariina Hallikainen, CEO and co-founder of Colossal Order, the studio that built Cities: Skylines with Unity and found great success doing it. Mariina was nice enough to share valuable information about their journey to create the game.

Colossal Order is a company that embodies the spirit of democratization. Their team of 9 built a game, Cities Skylines, that directly challenged one of the most venerable franchises in PC gaming. It’s fairly fitting then that David Helgason took the stage to thank Mariina and take a nostalgic look back at the first 10 years since Unity 1 launched at WWDC in 2005 by introducing a special 10 year anniversary highlight reel showcasing games from the past, present and future.

After a short message from Unity’s Andy Touch detailing some housekeeping issues for show attendees, the keynote ended and sessions began. As you may already know, Unite isn’t one singular event, and we’ve already held successful shows in Tokyo, Bangkok, Beijing, Seoul, and Taipei – and we’re not done yet! Our marquee conference, Unite Boston is taking place in September along with the Unity Awards and finally, Unite Melbourne (date TBD). Hopefully we’ll see you there!


Unity Roadmap

At Unite Europe 2015, we unveiled our public roadmap. We realize that our users have been wanting more information for some time. To address this, we carefully considered the best format for presenting this information, then assembled it in a way we hope is most useful to all of you.


For the past 10 years, Unity developed in a more organic fashion, with feature work usually trumping schedules or deadlines. Without some sense of regularity, a roadmap becomes a distant promise that is hard to subscribe to. Since shipping 5.0 and now 5.1, we are comfortable committing to a more regular rhythm of quarterly releases. With that commitment, a roadmap schedule becomes a useful tool for everyone involved.


The roadmap is a tool for Unity users to be able to reasonably predict what feature set they could work with/commit to when starting a project in the near term. With that in mind, our goal is to lay out the anticipated and probable work arriving in the next 9 months.

We are aware our community is interested in what important things are currently being worked on, and sometimes those timelines extend beyond 9 months. Additionally, we are still a finite number of often specialized engineers, and prioritization does take place. Taking any feature, multiplying it by 22 platforms, and making it easy to use for a broad span of users simply takes time.

In any case, we’ve organized the roadmap into five groups. First, we will display the next three upcoming releases with specified dates. For the very next release, we’ll also color code to show the confidence of the work item making the final cut and shipping. After trying something out in alpha and early beta, there is always a chance that the feature just really isn’t ready for prime-time. In that case, it could be delayed a version, or even kicked back to the drawing board. Everything of course is case-by-case.

The “Development” items are in progress, with a clear plan and dedicated engineering effort moving them towards release. However, the work may not complete within the 9 months of listed releases, or other externalities may prevent us from committing to a particular release. As an example of such externalities, we list WebGL 2.0 under “Development”: the technology is still evolving, and we depend on browser support being available to the general public.

Finally, we are left with “Research”, which contains all the prototypes, design-phase work, and other items that are getting actual time, but are not yet far enough along to be called earnest development against a solid plan.

We will aim for a weekly update to the roadmap contents, and will look to further refine the presentation.

If you would like to up-vote any listed feature or add a feature for consideration, please head over to feedback.unity3d.com.

Let us know what you think!

The Blacksmith Releases: Executable, Assets, Tools and Shaders

Hey. We’ve promised to release the assets and our custom project-specific tools and shaders from The Blacksmith realtime short film. Well, here they are.

First of all, you are welcome to download the executable of the demo.

The assets from the project, along with the custom tech, come in two main packages for your convenience: ‘The Blacksmith – Characters’ and ‘The Blacksmith – Environments’.

Below follows more information about what you can expect from each package, as well as some explanations about specific choices that we made. We’ve tried to not just release something of documentary value, but also make it more usable for you, in case you want to do something with it yourselves.

And let me answer straight away a very popular question you’ve been asking on previous occasions when we’ve released demo materials: yes, you can use all of this in whatever way you like, including in commercial projects. The standard Asset Store license applies. It would be our pleasure if you find the releases helpful on your own way to achieving success.

The Executable

We’ve added a simple interface for controlling playback of the demo:

  • Interacting with the slider allows you to scrub back and forth
  • Clicking the play/pause button, or anywhere else outside the UI, toggles the demo between playing and paused
  • Slight look-around is possible by pausing the playback and moving the mouse pointer
  • A ‘mute’ button toggles the demo’s audio

You have the option to choose among four quality settings presets:

  • Low – Recommended for machines that are not able to run the demo at higher settings
  • Medium – Recommended for high-end laptops and less powerful desktop PCs. Runs at 30 FPS in 720p on a laptop (Quad Core i7 2.5GHz with a GeForce GT 750M)
  • High – Recommended for most desktop PCs. Runs at 30 FPS in 1080p on a desktop PC (Core i7 4770 with a GeForce GTX 760)
  • Higher – For cards better than a GTX 760

It could take more than 30s to load, depending on your platform.

‘The Blacksmith – Characters’ Package

This project contains:

  • The Blacksmith character
  • The Challenger character
  • Hair Shader
  • Wrinkle maps
  • Unique character shadows
  • Plane reflections

Download ‘The Blacksmith – Characters’ package from here.

Blacksmith and Challenger

We have placed each character in a separate scene.

We have re-skinned the Challenger character, so it is now a little better prepared for more universal usage than the specific needs of our film. You are welcome to drop him into another environment and experiment. You will still need to do some work if you want him to animate nicely. We have included two sample animations, ‘idle’ and ‘walk’.


You will also find the main character, Blacksmith, in this package. We haven’t done any re-skinning on him: he is more complex than Challenger and would have taken us more time. We are including the original 3D character asset and you are free to use it in any way you like.


We are including the original, full-size 4K textures of both characters. In our project, we use a smaller version – 2K or less – of some of the textures.

The characters’ models and textures were created by Jonas Thornqvist and Sergey Samuilov.

Hair rendering

To achieve the anisotropic highlights that are so characteristic of hair, we decided to create a separate hair shader. As a complement, we also added a rendering component that calculates ambient occlusion for the hair, and set up a multi-pass rendering approach to avoid sorting errors between overlapping, translucent hair polygons.

The hair package and an accompanying example scene can be downloaded from the asset store. Don’t forget to check out the readme for more details about how it works, and how to configure it for your own projects.

Wrinkle maps

To add more life to the Challenger’s expressions in The Blacksmith, we added a rendering component for blending ‘wrinkle maps’ based on the animated influence weights of the Challenger’s facial blendshapes. The rendering component blends normal and occlusion maps in an off-screen pre-pass, and then feeds these to the standard shader as direct replacements for the normal and occlusion maps assigned in the material.
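As a rough illustration of the idea, here is a minimal Unity-style component that forwards a facial blendshape weight to a shader blend parameter. Everything here — the component name, the `_WrinkleBlend` property, the single-map setup — is a simplified assumption for illustration; the actual Blacksmith component blends full normal and occlusion maps in an off-screen pre-pass rather than exposing a single weight:

```csharp
using UnityEngine;

// Hypothetical sketch, not the shipped Blacksmith code: drive a wrinkle-map
// blend weight from one facial blendshape and hand it to the material.
[RequireComponent(typeof(SkinnedMeshRenderer))]
public class WrinkleMapDriver : MonoBehaviour
{
    public int browRaiseBlendShapeIndex = 0;  // index of the facial blendshape to track
    public Texture2D wrinkleNormalMap;        // normal map for the fully-wrinkled state

    SkinnedMeshRenderer smr;
    static readonly int WrinkleBlendId = Shader.PropertyToID("_WrinkleBlend");
    static readonly int WrinkleNormalId = Shader.PropertyToID("_WrinkleNormal");

    void Start()
    {
        smr = GetComponent<SkinnedMeshRenderer>();
        smr.material.SetTexture(WrinkleNormalId, wrinkleNormalMap);
    }

    void LateUpdate()
    {
        // Blendshape weights are 0..100 in Unity; normalize to 0..1 for the shader.
        float influence = smr.GetBlendShapeWeight(browRaiseBlendShapeIndex) / 100f;
        smr.material.SetFloat(WrinkleBlendId, influence);
    }
}
```

The real package generalizes this to many blendshapes at once, blending the maps off-screen and feeding the result to the Standard Shader in place of its normal and occlusion inputs.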

Available as a separate package here, and a dedicated blog post about it can be found here.

Unique character shadows

We wanted to make sure our characters always had soft, high-resolution shadows in close-up shots. We also needed to make sure we had enough shadow resolution to cover the rest of the world. To achieve this, we added a method of setting up a unique shadow map for a group of objects.

Unique shadows can also be grabbed as a separate package from the Asset Store, and more details are available in the dedicated blog post.

Plane reflections

Plane reflections in The Blacksmith are, in essence, the kind of planar reflections one would normally render for reflective water surfaces. The twist is that once rendered, we convolve the reflected image into each mip-level of the target reflection texture. During this convolution, we use the depth information of the reflection to force sharper contacts for pixels close to the reflection plane. The goal of this contact sharpening is to simulate the effect ray tracing would have in non-perfect reflections. The result of this convolution is a reflection texture suitable as a drop-in replacement for reflection probe cubemaps, with the material’s roughness still dictating which of the different mip-levels are sampled for reflection. We use a modified Standard Shader that, based on a shader keyword toggle, samples reflections from this dynamic reflection texture instead of reflection probe cubemaps.
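A heavily simplified sketch of that convolution step, in Unity C# terms: after rendering the planar reflection, walk the mip chain of the target texture and run a blur pass whose width grows with the mip index, feeding in the reflection depth so the shader can keep contacts sharp. The material and property names (`_SourceMip`, `_BlurRadius`, `_ReflectionDepth`) are assumptions for illustration, not the project's actual shader interface:

```csharp
using UnityEngine;

// Hypothetical sketch: convolve a rendered planar reflection into the mip
// chain of a reflection texture. Assumes 'convolveMaterial' is a custom blur
// shader that samples the previous mip and uses depth to sharpen contacts.
public class PlanarReflectionConvolver : MonoBehaviour
{
    public Material convolveMaterial;      // assumed custom convolve shader
    public RenderTexture reflectionTex;    // created with useMipMap = true
    public RenderTexture reflectionDepth;  // depth from the reflection render

    public void Convolve()
    {
        convolveMaterial.SetTexture("_ReflectionDepth", reflectionDepth);
        for (int mip = 1; mip < reflectionTex.mipmapCount; mip++)
        {
            // Tell the shader which mip to read and how wide to blur.
            convolveMaterial.SetFloat("_SourceMip", mip - 1);
            convolveMaterial.SetFloat("_BlurRadius", Mathf.Pow(2f, mip));
            // Write the blurred result into this mip level.
            Graphics.SetRenderTarget(reflectionTex, mip);
            Graphics.Blit(reflectionTex, convolveMaterial);
        }
    }
}
```

At shading time, the material's roughness then selects among these mips exactly as it would with a reflection probe cubemap, which is what makes the texture a drop-in replacement.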

‘The Blacksmith – Environments’ Package

This project contains:

  • The Smithy
  • The Exterior
  • Atmospheric Scattering
  • PaintJob tool (paint vegetation on any surface)
  • Vegetation system
  • MaskyMix Shader
  • Modified Standard Shader
  • Tonemapping

You can toggle between FPS and animated camera (press C) and between several light presets (press V).

Download ‘The Blacksmith – Environments’ package from here. Be warned, it is quite large.

The Smithy

Looking like this:

Game view of Smithy interior in ‘The Blacksmith - Environments’ package


The Exterior

We decided to re-arrange the original exterior from the movie in order to make it more relevant to game production, which we hope would be of more use to you.
We haven’t gone too far with the polishing. If you spend some additional time on it, it could be a good basis for some action of your own. Here is how it looks as we’ve provided it to you:


The difference is that, for our film, we had arranged the environment according to the cameras. If you are still curious, here is a screenshot of how the environment of the original project looks in Scene view:

Scene view of the original Blacksmith project


There were assets we didn’t use in the rearranged scene, but we still wanted you to have them. You will find them in the project.

Almost all of the gorgeous assets in this package were created by Plamen ‘Paco’ Tamnev. He also built the exterior environment scene.

Atmospheric Scattering

Our custom Atmospheric Scattering solution comes in this project. Find more details about it in this dedicated blogpost. For your convenience, we have also uploaded the Atmospheric Scattering as a separate package on the Asset Store. This way you don’t have to extract it yourself from this rather big project. Get it here.

PaintJob tool

This tool allows the artist to paint vegetation projected onto any geometry, not just Unity Terrains. It was a way for us to explore how we could make the most of the built-in Unity terrain tools, while also fulfilling one of the requirements we had for the project.

You are welcome to extract it from ‘The Blacksmith – Environments’ and use it in your own projects.

Vegetation system

There were a couple of things that we wanted to do with vegetation in The Blacksmith: we wanted it to be soft; we wanted custom shading on it; we wanted it to support dynamic GI; we wanted it to blend with whatever it considered to be ground; and we wanted it to work without being forced to use Unity terrains. PaintJob already took care of the latter, but to solve the rest we needed to do a little bit of a custom setup. We decided to build a component that would capture the PaintJob data – as well as any other hand-placed vegetation marked for capturing – and generate a number of baked vegetation meshes for which we could retain full control over rendering. Among other things, this allowed us to apply any kind of custom sorting we wanted, or reproject light probe data into dynamic textures.

MaskyMix Shader

Maskymix is an extended Standard Shader that mixes in an additional set of detail textures based on certain masking criteria – hence the name MaskyMix. The masking is primarily based on the angle between a material-specified world-space direction, and the per-pixel normal sampled from the base normal map. The mask is also modified by a tiled masking texture, as well as the vertex-color alpha channel of the mesh – if present. Depending on the masking thresholds specified in the material, the additional detail layer is mixed in based on this final mask. If the mesh provides a vertex alpha channel for masking, the vertex color can optionally be used for tinting the detail layer albedo.
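The masking math described above can be sketched on the CPU in a few lines (the real thing lives in the shader). The smoothstep-based thresholding and the exact parameter names here are my guesses at a plausible formulation, not the package's actual code:

```csharp
using System;
using System.Numerics;

// Hypothetical CPU-side sketch of the MaskyMix masking criteria.
public static class MaskyMix
{
    // Smoothstep as HLSL defines it.
    static float Smoothstep(float edge0, float edge1, float x)
    {
        float t = Math.Clamp((x - edge0) / (edge1 - edge0), 0f, 1f);
        return t * t * (3f - 2f * t);
    }

    // worldDir:    material-specified world-space direction (e.g. "up" for snow/moss)
    // normal:      per-pixel normal sampled from the base normal map (world space)
    // maskTex:     sample from the tiled masking texture, 0..1
    // vertexAlpha: vertex-color alpha, 0..1 (use 1 when the mesh has no vertex colors)
    public static float ComputeMask(Vector3 worldDir, Vector3 normal,
                                    float maskTex, float vertexAlpha,
                                    float threshold0, float threshold1)
    {
        // Primary term: how closely the surface normal faces the chosen direction.
        float facing = Vector3.Dot(Vector3.Normalize(worldDir), Vector3.Normalize(normal));
        float mask = Smoothstep(threshold0, threshold1, facing);
        // Modulate by the tiled mask texture and the per-vertex alpha.
        return mask * maskTex * vertexAlpha;
    }
}
```

An upward-facing pixel with full texture mask and vertex alpha yields a mask of 1, a downward-facing one yields 0, and the detail layer is mixed in proportionally between the two thresholds.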

Modified Standard Shader

The Blacksmith used quite a few tiny Standard Shader modifications to tweak things to our liking, or add small shader features that we wanted. Not all of these make sense outside of the main project, but some of them are included in the surface-shader based Standard Shader used in this package. These modifications are typically things like: optionally sampling per-pixel smoothness from the albedo alpha channel instead of a dedicated texture, or being able to control the culling mode of any material, or having additional control over the color and intensity of bounced global illumination.


Tonemapping

We’ve explained in an earlier blogpost how we used Tonemapping and applied Color Grading for The Blacksmith short film. Since the short film was shown, the Tonemapping work was taken up by a Unity engineering team, and is now being developed properly into Unity.

The HDR sky textures in the package are from NoEmotionHDRs (Peter Sanitra) / CC BY-ND 4.0. Used without modification.

That’s all from us for now. Having delivered everything we promised, we’re ready to go off to new adventures.

If you do something with our assets, we are very curious to hear about it. Post it as a comment here or drop us a line at demos@unity3d.com. And if it is something others could use, please consider sharing it back to the community.

Have fun!

Edit: Here are the links to the blogposts where we explain specific systems in more detail:

Wrinkle Maps in The Blacksmith

Unique Character Shadows in The Blacksmith

Atmospheric Scattering in The Blacksmith

June 23

Experiment on Roslyn C# compiler: Translatable Strings

Basically anyone can use resources, gettext, or managed-commons-core to translate (localize) strings in their C# code, and it can even be kind of terse, like this sample using managed-commons-core:
using System.Collections.Generic;
using Commons.GetOptions;
using static System.Console;
using static Commons.Translation.TranslationService;

namespace TestApp
{
    class AppCommand
    {
        // Returns the translated form of "First mock command"
        public virtual string Description { get { return _("First mock command"); } }

        public virtual string Name { get { return "alpha"; } }

        // Writes the translated form of "Command {0} executed!" with Name substituted
        public virtual void Execute(IEnumerable<string> args, ErrorReporter ReportError)
        {
            WriteLine(TranslateAndFormat("Command {0} executed!", Name));
        }
    }
}
Then C# 6.0 arrives with its fantastic new feature, interpolated strings, and that last method can't be rewritten to use the new feature, because:
public virtual void Execute(IEnumerable<string> args, ErrorReporter ReportError)
{
    WriteLine(_($"Command {Name} executed!"));
}
would in truth first format and then try to look up a translation, which would be truly the wrong thing to happen...
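For contrast, here is a sketch of what is already possible in C# 6 without any compiler change, at the cost of typing the parameter as FormattableString: the compiler then preserves the composite format string, so the lookup can happen before formatting. The Portuguese dictionary is a made-up stand-in, not managed-commons-core's API:

```csharp
using System;
using System.Collections.Generic;

// Sketch of a C# 6 library-level workaround using FormattableString.
public static class Translator
{
    static readonly Dictionary<string, string> PtBr = new Dictionary<string, string>
    {
        ["Command {0} executed!"] = "Comando {0} executado!"
    };

    public static string _(FormattableString message)
    {
        string translated;
        // Look up the untouched composite format string first...
        if (!PtBr.TryGetValue(message.Format, out translated))
            translated = message.Format;   // ...falling back to the original
        // ...and only then substitute the interpolated arguments.
        return string.Format(translated, message.GetArguments());
    }
}
```

The catch is that overload resolution prefers string over FormattableString, so an existing `_(string)` overload would still win for `$"..."` arguments, which is part of why a dedicated syntax is attractive.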
This experiment would allow, for C# 7, a new syntax for translatable strings that would turn that snippet into:
using System.Collections.Generic;
using Commons.GetOptions;
using static System.Console;

namespace TestApp
{
    class AppCommand
    {
        // Returns the translated form of "First mock command"
        public virtual string Description { get { return $_"First mock command"; } }

        public virtual string Name { get { return "alpha"; } }

        // Returns the translated form of "Command {0} executed!" with Name substituted
        public virtual void Execute(IEnumerable<string> args, ErrorReporter ReportError)
        {
            WriteLine($_"Command {Name} executed!");
        }
    }
}
Interpolated strings can return an IFormattable, and thus one can already do some localization (number formatting, for instance), but not true translation; so this feature is interesting beyond the small gain of shortening code.
But the killer feature that adding this to the compiler would enable is having the extraction of translatable texts done by the compiler itself, as it already does for XML documentation, when the right command-line parameter is specified.
$_"Command {Name} executed!" would be extracted as "Command {0} executed!", automagically.
All is well, but some may ask how this, which looks a lot like the way gettext does things, would work for extracting to a .resx file, where keys can't be arbitrary strings. Well, for this scenario the compiler would generate SHA1 hashes as keys and insert the hashing while calling the TranslationService behind the scenes. TranslationService is a pluggable infrastructure whose 'translators' can source their translations from resources, .mo files, hard-coded dictionaries, whatever...
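A sketch of how such a .resx-safe key could be derived from an arbitrary format string. The post doesn't specify the exact scheme; the "T" prefix here is my guess at keeping the key a valid identifier:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Hypothetical sketch: derive a .resx-safe key by SHA1-hashing the text,
// as the compiler could do behind the scenes.
public static class ResxKey
{
    public static string ForText(string text)
    {
        using (var sha1 = SHA1.Create())
        {
            byte[] hash = sha1.ComputeHash(Encoding.UTF8.GetBytes(text));
            // Prefix with a letter so the key is a valid identifier,
            // then append the 40 lowercase hex digits of the digest.
            var sb = new StringBuilder("T");
            foreach (byte b in hash)
                sb.Append(b.ToString("x2"));
            return sb.ToString();
        }
    }
}
```

Since the hash is stable for a given string, both the extractor and the runtime lookup land on the same key without any coordination.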
My experimentation will use managed-commons-core, of which I'm the core developer/maintainer, as the backend, but if real merit is found in this discussion, surely the runtime team will have to come forward and implement something like it, or just borrow the logic, which is MIT-licensed, from my implementation there.
  1. Code
  2. Issue

Android M Preview Now Available

Today, we’re excited to announce the preview release of Xamarin.Android featuring support for Android M’s developer preview. Android M is an exciting release that introduces several new features for Android developers, including new app Permissions, Fingerprint Authorization, enhanced Notifications, Voice Interactions, and Direct Sharing.

Android M

Installing the Android M Preview

  • Starting with Android Lollipop, Java JDK 1.7 is required to properly compile apps. You can download one for your system from Oracle’s website.
  • Update your Android SDK Tools to 24.3.3 from the Android SDK Manager
  • Install the latest Android SDK Platform-tools and Build-tools


  • Download the Android M (API 22, MNC preview) SDKs

Getting Started

With this preview installed, you should have the new APIs available to use in your Xamarin.Android apps. Check out our release notes, download links, and more details on our Introduction to Android M documentation to guide you through setup and the new APIs.

The post Android M Preview Now Available appeared first on Xamarin Blog.


Monologue is a window into the world, work, and lives of the community members and developers that make up the Mono Project, which is a free cross-platform development environment used primarily on Linux.

If you would rather follow Monologue using a newsreader, we provide the following feed:

RSS 2.0 Feed

Monologue is powered by Mono and the Monologue software.