Disclaimer: this is an automatic aggregator which pulls feeds and comments from many blogs of contributors that have contributed to the Mono project. The contents of these blog entries do not necessarily reflect Xamarin's position.

May 26

Introducing the Xamarin Podcast

Today, I’m excited to announce a new season of the Xamarin Podcast. The Xamarin Podcast makes it easier, and more enjoyable, to learn everything new in the world of C#, .NET, and mobile development. Be sure to download the first two episodes of the Xamarin Podcast today and subscribe to ensure that you never miss any announcements, interesting blog posts, projects, or tips and tricks from fellow developers.

podcast icon

We are two episodes into this season, and in the latest episode Pierce and I were joined by fellow Xamarin Developer Evangelist James Montemagno to discuss plugins for Xamarin. James gives us the rundown on why you should use plugins and how to go about developing your own. We also discuss HomeKit, the latest Xamarin Profiler, new components for Google Services, and upcoming events.

 

Get Involved

Do you have an interesting story, project, or advice for other .NET mobile developers? If so, Pierce and I would love to share it with the Xamarin community! Tweet @XamarinPodcast to share your blog posts, projects, and anything else you think other mobile developers would find interesting.

Subscribe or Download Today

To make it easier than ever to hear the latest news in the .NET, C#, and Xamarin realm, the Xamarin Podcast is available from both iTunes and SoundCloud. Be sure to download the first two episodes today, and don’t forget to subscribe!

Download today

 

The post Introducing the Xamarin Podcast appeared first on Xamarin Blog.

Bringing DirectX 11 features to mobile in Unity 5.1

One of the new features in Unity 5.1 is a new unified OpenGL rendering backend.

… A unified what now?

Until now, we had a separate renderer for OpenGL ES 2.0, one for OpenGL ES 3.0 (which shared a good deal, but not all, of its code with ES 2.0), and a completely different one for desktop OpenGL (which was stuck at the OpenGL 2.1 feature set). This, of course, meant a lot of duplicate work to get new features in, bugs that might appear on some renderer versions but not others, and so on.

So, in order to bring some sense to this, and to make it easier to add features in the future, we created a unified GL renderer. It can operate at various feature levels, depending on the available hardware:

  • OpenGL ES 2.0
  • OpenGL ES 3.0
  • OpenGL ES 3.1 ( + Android Extension Pack)
  • desktop OpenGL: all versions from 2.1 to 4.5 (desktop OpenGL support is experimental in 5.1)

All the differences between these API versions are baked into a capabilities structure based on the detected OpenGL version and the extensions that are available. This has multiple benefits, such as:

  • When an extension from desktop GL land is brought to mobile (such as Direct State Access) and we already support it on desktop, it is automatically detected and used on mobile as well.
  • We can artificially clamp the caps to match whichever target level (and extension set) we wish, for emulation purposes.
  • Provided that the necessary compatibility extensions are present on desktop, we can run GL ES 2.0 and 3.x shaders directly in the editor (again, still experimental in 5.1).
  • We get to use all the desktop graphics profiling and debugging tools against the OpenGL code already on the desktop and catch most of the rendering issues there.
  • We do not need to maintain separate, diverging codebases: bugs only need to be fixed once, and every optimization we make benefits all platforms simultaneously.

Compute shaders

Cocuy 2D fluid simulation package from the Unity Asset Store running on OpenGL ES 3.1. No modifications needed.

One of the first new features we brought to the new OpenGL renderer is compute shaders and image loads/stores (UAVs in DX11 parlance). And again, since we have a unified codebase, it is (more or less) automatically supported on all GL versions that support compute shaders (desktop OpenGL 4.3 onwards and OpenGL ES 3.1 onwards). The compute shaders are written in HLSL just as you’d do on DX11 in previous versions of Unity, and they get translated to GLSL. You’ll use the same Graphics.SetRandomWriteTarget scripting API to bind the UAVs and the same Dispatch API to launch the compute process. The UAVs are also available in other shader stages if supported by the hardware (do note that some, usually mobile, GPUs have limitations there; for example, the Mali T-604 in the Nexus 10 only supports image loads/stores in compute shaders, not in pixel or vertex shaders).
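
As a rough illustration of that workflow, here is a minimal C# sketch (not taken from the original post; the shader asset, kernel name and buffer name are hypothetical) that binds a buffer and dispatches a compute kernel with the standard Unity scripting API:

using UnityEngine;

public class ComputeExample : MonoBehaviour
{
    public ComputeShader shader;    // HLSL compute shader; translated to GLSL on the GL backend
    private ComputeBuffer buffer;

    void Start()
    {
        // 1024 floats, 4 bytes each
        buffer = new ComputeBuffer(1024, sizeof(float));

        // "CSMain" and "Result" are hypothetical names defined in the compute shader
        int kernel = shader.FindKernel("CSMain");
        shader.SetBuffer(kernel, "Result", buffer);

        // Launch the compute work: 1024 elements in 64-wide thread groups
        shader.Dispatch(kernel, 1024 / 64, 1, 1);

        // To access the buffer as a UAV from other shader stages (where the hardware allows it),
        // bind it with the same API used on DX11
        Graphics.SetRandomWriteTarget(1, buffer);
    }

    void OnDestroy()
    {
        buffer.Release();
    }
}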

Tessellation and Geometry shaders

GPU Tessellation running on OpenGL ES 3.1

Both tessellation and geometry shaders from DX11 side should work directly on Android devices supporting Android Extension Pack. The shaders are written as usual, with either #pragma target 50 or #pragma target es31aep (see below for the new shader targets), and it’ll “just work” (if it doesn’t, please file a bug).

Other goodies

Here’s a short list of other things that work mostly the same as on DX11:

  • DrawIndirect using the results of a compute shader via append/consume buffers. The API is the same one the DX11 features currently use.
  • Advanced blend modes (dodge, burn, darken, lighten, etc.) are exposed whenever the KHR_blend_equation_advanced extension is supported by the GPU. The extension is part of the Android Extension Pack and can be found on most semi-recent desktop GPUs as well as the high-end mobile ones (Adreno 4xx, Mali T-7xx, NVIDIA K1+). DirectX 11 does not support these blend modes. They can be set both from the scripting API (see the sketch after this list) and from ShaderLab shaders; the new blend mode enums are documented under UnityEngine.Rendering.BlendOp.
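
As a rough sketch of the scripting side (not from the original post: it assumes a hypothetical material whose ShaderLab pass declares "BlendOp [_BlendOp]" so the operation can be driven from a property, and the enum member is assumed from the BlendOp documentation), setting an advanced blend mode could look like this:

using UnityEngine;
using UnityEngine.Rendering;

public class AdvancedBlendExample : MonoBehaviour
{
    // Material assumed to expose its blend operation through a _BlendOp shader property
    public Material material;

    void Start()
    {
        // Only takes effect on GPUs exposing KHR_blend_equation_advanced
        material.SetInt("_BlendOp", (int)BlendOp.ColorDodge);
    }
}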

Differences from DX11

There are some differences to the feature set available in DX11, apart from things discussed above:

  • The mobile GPUs have a fairly limited list of supported UAV formats: 16- and 32-bit floating point RGBA, RGBA Int32, 8-bit RGBA, and single-channel 32-bit integer and floating point formats. Notably, the 2-channel RG formats are not supported for any data type. These formats are available on desktop GL rendering, though.
  • GL ES 3.1 does not support any HLSL shader interpolation qualifiers other than ‘centroid’; all other qualifiers are ignored in ES shaders.
  • GL ES 3.1 still does not mandate floating-point render targets, although most GPUs do support them through extensions.
  • The memory layout for structured compute buffers has some minor differences between DX11 and OpenGL, so make sure your data layouts match on both renderers. We’re working on minimizing the impact of this, though.

Shader pipe

The shader compilation process for ES 2.0 and the old desktop GL renderer (and, until now, for ES 3.0 as well) translates the HLSL source directly to GLSL, passing it through the hlsl2glsl translator and the GLSL optimizer.

The problem with this is that neither of those modules supports anything later than Shader Model 3.0 shaders, effectively limiting shaders to the DX9 feature set. In order to compile HLSL shaders that use DX11 / Shader Model 5.0 features, we use a different pipeline for GL ES 3.0 and above, and for all desktop GL versions running on the unified GL backend: the HLSL source is first compiled to DirectX bytecode with the D3D compiler, and that bytecode is then translated to GLSL.

The new shader pipeline seems to be working fairly well for us, and it allows us to use Shader Model 5.0 features. It can also benefit from the optimizations the D3D compiler performs (but it inherits all the drawbacks of a bytecode that treats everything as vec4s, always). As a downside, we now have a dependency on the D3D compiler and the language syntax it accepts, so we have to jump through some hoops to get our Unity-specific language features through (such as sampler2D_float for sampling depth textures).

Existing OpenGL ES 3.0 (and of course, OpenGL ES 2.0) shaders should continue to work as they did previously. If they do not, please file a bug.

So, how can I use it?

For Unity 5.1 release, we are not yet deprecating the legacy OpenGL renderer, so it will still be used on OS X and on Windows when using the -force-opengl flag. The desktop GL renderer is still considered very experimental at this point, but it will be possible to activate it with the following command line arguments for both the editor and standalone player (currently Windows only, OSX and Linux are on our TODO list):

  • "-force-glcore" Force best available OpenGL mode
  • "-force-glcoreXY" Force OpenGL Core X.Y mode
  • "-force-gles20" Force OpenGL ES 2.0 mode, requires the ARB_ES2_compatibility extension on desktop
  • "-force-gles30" Force OpenGL ES 3.0 mode, requires ARB_ES3_compatibility
  • "-force-gles31" Force OpenGL ES 3.1 mode, requires ARB_ES3_1_compatibility
  • "-force-gles31aep" Force OpenGL ES 3.1 mode + Android Extension Pack feature level, requires ARB_ES3_1_compatibility and the extensions contained in the AEP (if used by the application)

Remember to include the corresponding shaders in the Standalone Player Settings dialog (uncheck the “Automatic Graphics API” checkbox and you’ll be able to manually select the shader languages that will be included).

Note that these flags (including the ES flags) can also be used when launching the editor, so you will see the rendering results of the actual ES shaders that will be used on the target. Also note that these features are to be considered experimental on desktop at this stage, so experiment with them at your own risk. In 5.1, you can also use the standalone player to emulate GL ES targets: in the Player Settings, just make sure you include GL ES 2/3 shaders in the graphics API selection and start the executable with one of the -force-glesXX flags above. We’re also working on getting this to function on Linux.

There are some known issues with running ES shaders on the desktop: desktop and mobile use different encodings for normal maps and lightmaps, so the ES shaders expect the data to be in a different encoding than what is packaged alongside the standalone player build. The OpenGL Core shader target should work as expected.

On iOS, the only change is that the ES 3.0 shaders will be compiled using the new shader pipeline. Please report any breakage. ES 2.0 and Metal rendering should work exactly as before. Again, please report any breakage.

On Android, if the “Automatic Graphics API” checkbox is cleared, you can select which shaders to include in your build, and also set manifest requirements for OpenGL ES 3.1 and OpenGL ES 3.1 + Android Extension Pack (remember to set your required API level to Android 5.0 or later as well). The default setting is that the highest available graphics level will always be used.

AN IMPORTANT NOTE:

Apart from some fairly rare circumstances, there should never be any need to change the target graphics level from Automatic. ES 3.1 and ES 3.0 should work just as reliably as ES 2.0, and if this isn’t the case, please file a bug. (Of course, it is possible to write a shader using #pragma only_renderers etc. that will break on ES3 vs ES2, but you get the idea.) The same applies to the desktop GL levels once we get them ready. The Standard shader is currently configured to use a simpler version of the BRDF on ES 2.0 (and also cuts some other corners here and there for performance reasons), so you can expect OpenGL ES 3.0 builds to have both more accurate rendering results and slightly lower performance figures compared to ES 2.0. Similarly, directional realtime lightmaps require more texture units than ES 2.0 is guaranteed to provide, so they are disabled there.

When writing ShaderLab shaders, the following new #pragma target enums are recognized:

  • #pragma target es3.0  // Requires OpenGL ES 3.0, desktop OpenGL 3.x or DX Shader Model 4.0, sets SHADER_TARGET define to 35
  • #pragma target es3.1  // Requires OpenGL ES 3.1, desktop OpenGL 4.x (with compute shaders) or DX Shader Model 5.0. Sets SHADER_TARGET define to 45

When using the existing #pragma targets, they map to the following GL levels:

  • #pragma target 40 // Requires OpenGL ES 3.1 or desktop OpenGL 3.x or DX Shader Model 4.0
  • #pragma target 50 // Requires OpenGL ES 3.1 + Android Extension Pack, desktop OpenGL >= 4.2 or DX Shader Model 5.0

For including and excluding shader platforms from using specific shaders, the following #pragma only_renderers / exclude_renderers targets can be used:

  • #pragma only_renderers gles  // As before: Only compile this shader for GL ES 2.0. NOTE: ES 3.0 and later versions will not be able to load this shader at all!
  • #pragma only_renderers gles3  // Only compile for OpenGL ES 3.x. NOTE: All ES levels starting from OpenGL ES 3.0 will use the same shader target. Shaders using AEP features, for example, will simply be marked as unsupported on OpenGL ES 3.0 hardware
  • #pragma only_renderers glcore // Only compile for the desktop GL. Like the ES 3 target, this also scales up to contain all desktop GL versions, where basic shaders will support GL 2.x while shaders requiring SM5.0 features require OpenGL 4.2+.

Future development

As described above, a common GL codebase allows us to finally bring more features to the OpenGL / ES renderer. Here are some things we’ll be working on next (no promises, schedule- or otherwise, your mileage may vary, please talk with your physician before use, and all the other usual disclaimers apply):

  • Finalise desktop GL, deprecate the legacy GL renderer and use this as the new default.
  • Deprecate the old “GL ES 2.0 graphics emulation” mode in the editor (it basically just clamps the DX renderer to Shader Model 2.0) and replace it with actually using the ES shaders and rendering backend.
  • More accurate target device emulation: Once we can run the ES shaders in the editor directly, we can finally do more accurate target device emulation. Using the caps system, we’d generate a database of GL caps for lots of Android/iOS devices, containing each supported GL extension, supported texture formats etc and apply them to the editor renderer directly. This way the developer could see (approximately) what the scene should look like on any given device (apart from differences in GPU-specific bugs, shader precisions etc).

Diff math

This blog post introduces a few hints about what you can expect when you diff changesets (commits) in your version control system. It is something similar to what we wrote months ago to explain the difference between 2-way and 3-way merge.

The diff function

Diff(9) actually means “diff with previous”, or Diff(8, 9). We’ll assume the Diff function takes the form Diff(src, dst).

May 25

Many Levels of Rejection

Submitting apps to the App Store is filled with many wonderful opportunities to be rejected. Let’s count them!

1. Compiling/Building your app is the first possible level of rejection. It’s usually your fault, but some days…

2. Signing your app is also an adventure in rejection, with the added joy of creating multitudes of profiles and app IDs that you really don’t know what to do with but are too afraid to delete.

3. Sometimes the phone itself will reject you next. Maybe Springboard is having a bad day, or maybe you really have made a mess of those profiles…

4. Hey look at me! The watch wants in on this game too! It likes to reject you for a variety of reasons but doesn’t like to tell you which. You’ll have to dig into the logs to find its secret motives.

5. Time to submit that puppy and get rejected by iTunes Connect! iTunes is actually pretty good at this whole rejection thing and does its best at helping you through the difficult times.

6. Well now that you’re uploaded, surely the app… whoops. Nope. Time for the little Prerelease Binaries to reject you. Oh you didn’t know about that esoteric requirement? You read every guide, right? Right?

7. Time to submit for review and let the humans… nope, wrong again. Another computer can reject you now before a human ever sees it. Watch your inbox, because iTunes Connect has no idea what that computer is doing.

8-1,000. Finally after all that, you can be rejected by a human. This rejection process is long, filled with unspoken truths, false assumptions, and bitter quibbles. But at least it’s a human…

1,001-1,024. It was all worth it, your app is in the store and is running gr… oh, it crashes on iPad 2s when you rotate the screen during the 5th moon of the year. 

So close.

Agnostic Cloud Management

Hi, I am Karsten. I have been working behind the scenes at Unity since 2011 as an IT Manager, supporting our IT infrastructure.

Background

IT at Unity does many things behind the scenes, from ordering hardware to operating services, both for our own internal usage and for our customers. I tend to say that our finest role is to make sure that everybody who relies on the services we provide is able to do their job.

Modern conveniences that you take for granted, like for example getting an Uber, require a lot of reliable IT infrastructure. We put together this blog post to give you some insight into the basic principles and tools we use to build the backbone of Unity’s IT.

During the years of building and maintaining the IT infrastructure of Unity we have tried to live by 4 simple guidelines.

  1. Use Open Source where possible.
  2. Design by the KISS (Keep It Simple, Stupid) principle.
  3. No Single Point Of Failure – NSPOF.
  4. If anything within the infrastructure can be done better or is not working optimally, address it and fix it, even if we just built it.

What I am going to write about today is one of the building blocks we use for our infrastructure: OpenNebula. OpenNebula is a cloud management tool that supports a variety of virtualization technologies, including Xen, KVM and VMware, and has hybrid cloud functionality for Softlayer Cloud, Amazon EC2, and Azure. This enables us to combine bare-metal servers with public clouds, so we can build our services without the risk of running out of resources. Furthermore, it gives us the flexibility to use the technology that fits each service best, with a single API to leverage all of it.

When I first joined Unity, we used a very traditional virtualization strategy: create VMs as needed, often just one per purpose. That worked out well for a while; however, at some point our old setup did not scale, and we wanted to find a better way to manage the complete environment, from creating disks to deploying VMs.

What next?

We started looking for tools that supported our needs and our guidelines. We came up with a list of products that we evaluated on a high level. It quickly became apparent to us that the only real choice we had was OpenNebula.

The main winning points were:

  • OpenNebula uses known technology to manage the cloud: Linux, KVM, libvirt, etc.
  • OpenNebula uses standard virtualization tools, so we did not have to learn new, complicated tech.
  • We can manage the complete cloud environment without OpenNebula, because it uses default virtualization tools. So if for some reason OpenNebula were to stop working, we would still be able to manage, migrate, etc. existing VMs with standard tools like libvirt.
  • We can manage our virtual environment as well as Amazon EC2 and Softlayer Cloud the same way, through OpenNebula.

ucloud1

Third try is the charm

We then started to migrate our hosting to OpenNebula and have had 3 different clouds managed this way. The first time, we used an older version of OpenNebula and a big EMC SAN for storage. We experienced challenges with this setup and realized after some time that our GFS2 cluster was not the best choice for storing images. The second time, we used OpenNebula 4.0 and replaced GFS2 with Ceph. This provided more flexibility, but we had to ‘hack’ parts of OpenNebula for it to support Ceph cloning / CoW. The third try was an iteration on the second setup with a more mature (non-hacked!) OpenNebula. Throughout all setups we have always had a clear vision to embrace a hybrid cloud setup with both public- and private-facing parts.

Today

The evolution of IT is moving very fast, and some of the functionality that was not in OpenNebula back when we first deployed it is now available. Virtual Data Centers, for example, mean we no longer need to run 3 independent clouds, but can run them all in a federated environment. As you all know, Unity is moving fast, and this in turn requires that the IT infrastructure evolves at the same pace to keep up with the business.

To support our growing business we just built a new cloud infrastructure. We involved OpenNebula Systems, the company behind OpenNebula, to help us finalize our design ideas and to speed up the deployment phase. We mainly used the functionality of OpenNebula, but also required some additional functionality that we funded: Ceph snapshots.

Why we wanted to fund the feature:

  1. To support and give back to the open-source community behind OpenNebula.
  2. To get the required extra functionality.
  3. To make sure that it is supported upstream so that the functionality will continue to be available.

Since we span the globe, we need a setup that supports that, so we created a cloud that is truly global. We have data centers in the US, EMEA, and Asia regions.

ucloud2

When we grow, our model allows us to add extra data centers easily, in line with guidelines #2 and #3. Since our cloud infrastructure is built on the KISS principle, we have created the data centers so they can run interconnected, autonomously, or anything in between.

One data center consists of the following components:

  • Compute (CPU+RAM).
  • Storage.
  • Hybrid scale-out to both Softlayer Cloud and Amazon EC2.

ucloud3

This will enable us to create auto-scaling groups that will initially use the resources on our bare-metal servers. If we then run out of local resources, we can scale out into either Softlayer Cloud or Amazon EC2.
Together with OpenNebula Systems we got all the components running in just 4 weeks. To illustrate the flexibility, and that our design works as expected, we created a new data center in just 2 days. This exercise made us confident that we can continue to scale at the pace the business requires of us.

May 23

dupefinder - Removing duplicate files on different machines

Imagine you have an old and a new computer. You want to get rid of that old computer, but it still contains loads of files. Some of them are already on the new one, some aren’t. You want to get the ones that aren’t: those are the ones you want to copy before tossing the old machine out.

That was the problem I was faced with. Not willing to do the tedious task of comparing and merging files manually, I decided to write a small tool for it. Since it might be useful to others, I’ve made it open-source.

Introducing dupefinder

Here’s how it works:

  1. Use dupefinder to generate a catalog of all files on your new machine.
  2. Transfer this catalog to the old machine.
  3. Use dupefinder to detect and delete any known duplicates.
  4. Anything that remains on the old machine is unique and needs to be transferred to the new machine.

You can get it in two ways: there are pre-built binaries on Github, or you may use go get:

go get github.com/rubenv/dupefinder/...

Usage should be pretty self-explanatory:

Usage: dupefinder -generate filename folder...
    Generates a catalog file at filename based on one or more folders

Usage: dupefinder -detect [-dryrun / -rm] filename folder...
    Detects duplicates using a catalog file in one or more folders

  -detect=false: Detect duplicate files using a catalog
  -dryrun=false: Print what would be deleted
  -generate=false: Generate a catalog file
  -rm=false: Delete detected duplicates (at your own risk!)

Full source code on Github

Technical details

Dupefinder was written in Go, which is my default choice of language nowadays for these kinds of tools.

There’s no doubt that you could use any language to solve this problem, but Go really shines here. The combination of lightweight threads (goroutines) and message passing (channels) makes it possible to write clean and simple code that is extremely fast.

Internally, dupefinder is a small pipeline: a file crawler feeds filenames to a pool of hashers, and the hashers feed their results to a single result processor.

Each of these stages runs as a goroutine, connected by channels. There is one hashing goroutine per CPU core.

The beauty of this design is that it’s simple and efficient: the file crawler ensures that there is always work to do for the hashers, the hashers just do one small task (read a file and hash it) and there’s one small task that takes care of processing the results.

The end-result?

A multi-threaded design, with no locking misery (the channels take care of that), in what is basically one small source file.

Any language can be used to get this design, but Go makes it so simple to quickly write this in a correct and (dare I say it?) beautiful way.

And let’s not forget the simple fact that this trivially compiles to a native binary on pretty much any operating system that exists. Highly performant cross-platform code with no headaches, in no time.

The distinct lack of bells and whistles makes Go a bit of an odd duck among modern programming languages. But that’s a good thing. It takes some time to wrap your head around the language, but it’s a truly refreshing experience once you do. If you haven’t done so, I highly recommend playing around with Go.

Comments | @rubenv on Twitter

May 22

Case Study: Development Time Slashed by 50% for Leading Transport Company

MRW is Spain’s leading national and international express transport company. Powered by 10,000 people linked to the brand in over 1,300 franchises and 64 logistical platforms in Spain, Portugal, Andorra, Gibraltar, and Venezuela, MRW handles an average of 40 million parcel deliveries per year and ships to more than 200 countries and over 10,000 online stores.
 
A mission critical element of the company’s success is the MRWMobile app that supports 2,500 concurrent users in the field by helping them with process optimization, including delivery coordination. MRWMobile was developed by the company’s Portugal-based partner Moving2u, and after the successful creation of MRWMobile 3 for Windows, MRW wanted to expand to Android.
 
MRW app on HTC One

The app is used in the field for a range of functions, including proof of picking up deliveries in real time, receiving new work orders, and for rescheduling order pick ups and deliveries, all while using secure communications and local data encryption. To support these functions, the app needs to support a range of capabilities, including offline work, local storage, push sync, multi-threading, barcode scanning, photos, and signature capture. The app also incorporates geolocation, multilingual support, multiple user profiles, mobile payment, printing, document scanning, and internal communications with messages and tasks.
 
The magnitude of requirements coupled with budget and conflicting project roadblocks created time-to-market challenges. “Without Xamarin, it would have taken at least twice as long to have the full feature set of the app built and tested,” says Alberto Silva, R&D Manager at Moving2u.
 
“Xamarin is the right approach for any serious Android, iOS, or mobile cross-platform app development,” Alberto adds. “Even if you don’t plan to go cross-platform, the productivity of Xamarin in producing an app for a single platform in C# is unmatched.”
 

View the Case Study
 

The post Case Study: Development Time Slashed by 50% for Leading Transport Company appeared first on Xamarin Blog.

May 21

RSVP for Xamarin’s WWDC 2015 Party

Join the Xamarin team for a party celebrating WWDC at Roe Restaurant on Tuesday, June 9th, from 6:00 – 9:00pm. Just two blocks from Moscone you’ll find great conversation with your fellow mobile developers, drinks, and appetizers. We’d love for you to join us to talk about your apps and projects and the latest news from Apple.

WWDC 2015 Logo

When: Tuesday, June 9th, 6pm-9pm
Where: Roe Restaurant, 651 Howard St, San Francisco, CA, 94105

RSVP

Even if you’re not attending WWDC, all of our Bay Area friends are welcome!

You can make the most of your time in town for WWDC week by scheduling dedicated time with a member of our team.

We hope to see you there!

The post RSVP for Xamarin’s WWDC 2015 Party appeared first on Xamarin Blog.

Xsolla Unity SDK – a customizable in-game store for desktop, web and mobile

We’re excited to announce a new service partner on the Asset Store: Xsolla!

For almost a decade, Xsolla has been providing payment services to some of the biggest names in the game industry: Valve, Twitch, Ubisoft and Kongregate, to name only a few. With the introduction of the new Unity Asset Store product, the Xsolla solution is now available to Unity developers everywhere!

Global payment solution, monitoring and marketing in one

The Xsolla Unity SDK is transparent and customizable; it supports more than 700 payment options from all over the world and comes with a sophisticated monitoring system and marketing tools. Plus, all transactions are protected by a robust anti-fraud solution.

1,2,3 Go! You’re up and running across your platforms

Thanks to straightforward documentation and extensive support, you can integrate the Xsolla solution in a matter of hours. Use it across Web, desktop and mobile!

“The Xsolla Plugin allows you to seamlessly integrate a fully functional virtual store right in your game. It’s super easy. With Unity 5, developers can build bigger and more advanced online products. It’s a great time to explore the multiplatform possibilities of Unity and expand products beyond smartphones and tablets to PCs and the browser-based market. You can get a high quality reliable in-app store running in a browser window with Unity 5 and the Xsolla Plugin. It’s never been easier”.

Alexander Agapitov, CEO & Founder, Xsolla

The Xsolla Plugin:

  • A reliable solution from a trusted provider
  • Complete in-game store management toolset to manage items, virtual currency, and subscription billing
  • Integrated tools for promotion of your in-game products
  • Advanced reporting and analytics
  • 24/7 anti-fraud protection and customer support
  • Automatic localization of UI, payment methods and currencies
  • Full multiplatform support across desktop, web and mobile

May 20

Get Started with HomeKit

Next month sees the launch of several highly anticipated HomeKit accessories, which debuted earlier this year at CES. With HomeKit-enabled accessories finally coming to market, it’s time to create iOS apps that utilize Apple’s home automation APIs.

homekit

HomeKit is designed to bring an end to individual apps for smart home accessories. Gone are the days when you switch between apps to set up the perfect movie night scene; instead, you’ll have one app that communicates with all of your accessories using HomeKit.

HomeKit has a couple of core concepts that form the basis for the entire API. The general idea of HomeKit is that you interact with a Home, which has Rooms, which have Accessories, which have states. In order to get started, you’ll need to create a Home Manager. The Home Manager is your entry point to HomeKit: it keeps a common database of Accessories and allows you to manage your Home(s). It also notifies you of changes, which makes it easy to deal with changes to the HomeKit configuration made by other HomeKit-enabled apps.

General Setup Tips

If you’re looking to test these APIs, it’s worth noting that you’ll need access to a physical iOS device running iOS 8 at a minimum. HomeKit doesn’t currently work within the iOS Simulator and the exception thrown doesn’t hint towards this. Because you’re running on the device, you’ll need to make sure you’ve set the entitlements for the project to allow for HomeKit. You’ll probably also want to grab a copy of Apple’s Hardware IO Tools for Xcode. The Hardware IO Tools for Xcode allow you to simulate HomeKit-enabled devices for testing your app. You can fetch this from the Apple Developer Center if you’re an existing member.

Creating a Home

To create a Home, we must first create an instance of the Home Manager.

var homeManager = new HomeKit.HMHomeManager();

Once we’ve done this, we can go ahead and add a Home to the homeManager object.

homeManager.AddHome("Guildford", (HomeKit.HMHome home, NSError error) =>
{
    if (error != null)
    {
        // Adding the home failed. Check the error object for why!           
    }
    else
    {
        // Successfully added home!
    }
});

All Homes within your HomeKit configuration must have a unique name, so if we have two homes in the same city, it might be worth finding another naming convention. Homes must be uniquely named because we will be able to interact with them using Siri once Apple fully enables HomeKit (hopefully later this year). For example, we’ll be able to say, “Hey Siri, turn my lights off in Guildford,” and like magic, all of the lights in your Home in Guildford will be switched off.

Once you’ve added a Home, the DidUpdateHomes event will be raised. This allows other apps to ensure they’ve processed any new Homes that have been added to the database. We can subscribe to the event with the following API.

homeManager.DidUpdateHomes += (sender, args) =>
{
    foreach (var home in homeManager.Homes)
    {
        // Display a simple alert for each Home in the shared HomeKit database
        var alert = new UIAlertView("Home...", home.Name, null, "OK");
        alert.Show();
    }
};

Creating a Room

A Home also contains Rooms, each of which has a list of Accessories that are unique to that particular Room. Much like the Home, a Room can notify you about any changes and must also be uniquely named. This again allows you to interact with the Room using Siri. The API for creating a Room is almost identical to creating a Home.

home.AddRoom("Kitchen", (HMRoom room, NSError error) =>
{     
	if (error != null)     
	{         
	    //unable to add room. Check error for why     
	}     
	else     
	{         
	    //Success     
	}
});

Accessories

Accessories are where HomeKit starts to become a little more interesting. Accessories correspond to physical devices and must be assigned to a Room. They have device state that you can query; for example, you can query the intensity of a light fixture or the temperature of a thermostat. As you’ve probably already guessed, Accessories must be uniquely named, but this time only within the Home where they reside. Accessories will also notify you of changes to their state so you don’t have to constantly query them to keep your app up to date; one common notification is a change in whether the device is reachable.

accessory.DidUpdateReachability += (o, eventArgs) =>
{                         
	if (accessory.Reachable == true)                         
	{                             
	    //we can communicate with the accessory                         
	}                         
	else                         
	{                             
	    //the accessory is out of range, turned off, etc                    
	}                     
};

A few of the more interesting aspects of Accessories are Services and Characteristics. A Service represents a specific piece of device functionality; for instance, Apple gives the example that a garage door accessory may have a light and a switch Service. Users won’t ever create Services or Characteristics, as these are supplied by the accessory manufacturer, but it’s your job as a developer to make sure users can interact with those Services.
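
As a rough sketch (not from the original post; the member names follow the Xamarin.iOS HomeKit bindings, so treat them as assumptions), you can walk the Services of the accessory object from the previous example and inspect their Characteristics like this:

foreach (var service in accessory.Services)
{
    Console.WriteLine("Service: {0}", service.Name);
    foreach (var characteristic in service.Characteristics)
    {
        // The characteristic type indicates what the value represents (power state, brightness, etc.)
        Console.WriteLine("  Characteristic: {0}", characteristic.CharacteristicType);
    }
}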

Action Sets and Triggers

Actions are by far my favorite feature of HomeKit. Actions and triggers allow you to control multiple Accessories at once. For example, when I go to bed I like to turn the lights off and turn my fan on. I can program this action with HomeKit to set the state of the Accessories and then use triggers to call the action. I personally have an iBeacon stuck to the underside of my nightstand which could detect my proximity and then call my action set for sleeping. As with almost every aspect of HomeKit, each action set has a unique name within the Home that can be recognized by Siri.
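
Creating an action set follows the same pattern as creating Homes and Rooms above. Here is a rough sketch (not from the original post; the exact AddActionSet binding is an assumption based on that pattern):

home.AddActionSet("Bedtime", (HMActionSet actionSet, NSError error) =>
{
    if (error != null)
    {
        // Unable to add the action set. Check error for why.
    }
    else
    {
        // Success! The action set can now be populated with actions that
        // write target values to accessory characteristics, and fired by a trigger.
    }
});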

Conclusion

I’m extremely excited about the prospect of HomeKit evolving into the go-to solution for home automation. With HomeKit-enabled accessories finally coming to market, there’s never been a better time to create an iOS app that utilizes Apple’s home automation APIs.

To start integrating HomeKit into your apps today, check out our HomeKitIntro sample, which will give you everything you need to build amazing home automation apps with HomeKit.

The post Get Started with HomeKit appeared first on Xamarin Blog.

IL2CPP Internals – Debugging tips for generated code

This is the third blog post in the IL2CPP Internals series. In this post, we will explore some tips which make debugging C++ code generated by IL2CPP a little bit easier. We will see how to set breakpoints, view the content of strings and user defined types and determine where exceptions occur.

As we get into this, consider that we are debugging generated C++ code created from .NET IL code. So debugging it will likely not be the most pleasant experience. However, with a few of these tips, it is possible to gain meaningful insight into how the code for a Unity project executes on the actual target device (we’ll talk a little bit about debugging managed code at the end of the post).

Also, be prepared for the generated code in your project to differ from this code. With each new version of Unity, we are looking for ways to make the generated code better, faster and smaller.

The setup

For this post, I’m using Unity 5.0.1p3 on OSX. I’ll use the same example project as in the post about generated code, but this time I’ll build for the iOS target using the IL2CPP scripting backend. As I did in the previous post, I’ll build with the “Development Player” option selected, so that il2cpp.exe will generate C++ code with type and method names based on the names in the IL code.

After Unity is finished generating the Xcode project, I can open it in Xcode (I have version 6.3.1, but any recent version should work), choose my target device (an iPad Mini 3, but any iOS device should work) and build the project in Xcode.

Setting breakpoints

Before running the project, I’ll first set a breakpoint at the top of the Start method in the HelloWorld class. As we saw in the previous post, the name of this method in the generated C++ code is HelloWorld_Start_m3. We can use Cmd+Shift+O and start typing the name of this method to find it in Xcode, then set a breakpoint in it.

image05

We can also choose Debug > Breakpoints > Create Symbolic Breakpoint in Xcode, and set it to break at this method.

image02

Now when I run the Xcode project, I immediately see it break at the start of the method.

We can set breakpoints on other methods in the generated code like this if we know the name of the method. We can also set breakpoints in Xcode at a specific line in one of the generated code files. In fact, all of the generated files are part of the Xcode project. You will find them in the Project Navigator in the Classes/Native directory.

image03

Viewing strings

There are two ways to view the representation of an IL2CPP string in Xcode. We can view the memory of a string directly, or we can call one of the string utilities in libil2cpp to convert the string to a std::string, which Xcode can display. Let’s look at the value of the string named _stringLiteral1 (spoiler alert: its contents are “Hello, IL2CPP!”).

In the generated code with Ctags built (or using Cmd+Ctrl+J in Xcode), we can jump to the definition of _stringLiteral1 and see that its type is Il2CppString_14:

struct Il2CppString_14
{
  Il2CppDataSegmentString header;
  int32_t length;
  uint16_t chars[15];
};

In fact, all strings in IL2CPP are represented like this. You can find the definition of Il2CppString in the object-internals.h header file. These strings include the standard header part of any managed type in IL2CPP, Il2CppObject (which is accessed via the Il2CppDataSegmentString typedef), followed by a four-byte length, then an array of two-byte characters. Strings defined at compile time, like _stringLiteral1, end up with a fixed-length chars array, whereas strings created at runtime have an allocated array. The characters in the string are encoded as UTF-16.

If we add _stringLiteral1 to the watch window in Xcode, we can select the View Memory of “_stringLiteral1” option to see the layout of the string in memory.

image06

Then in the memory viewer, we can see this:

image00

The header member of the string is 16 bytes, so after we skip past that, we can see that the four bytes for the size have a value of 0x000E (14). Immediately after the length comes the first character of the string, 0x0048 (‘H’). Since each character is two bytes wide, but in this string all of the characters fit in only one byte, Xcode displays them on the right with dots in between each character. Still, the content of the string is clearly visible. This method of viewing strings does work, but it is a bit difficult for more complex strings.

We can also view the content of a string from the lldb prompt in Xcode. The utils/StringUtils.h header gives us the interface for some string utilities in libil2cpp that we can use. Specifically, let’s call the Utf16ToUtf8 method from the lldb prompt. Its interface looks like this:

static std::string Utf16ToUtf8 (const uint16_t* utf16String);

We can pass the chars member of the C++ structure to this method, and it will return a UTF-8 encoded std::string. Then, at the lldb prompt, if we use the p command, we can print the content of the string.

(lldb) p il2cpp::utils::StringUtils::Utf16ToUtf8(_stringLiteral1.chars)
(std::__1::string) $1 = "Hello, IL2CPP!"

Viewing user defined types

We can also view the contents of a user defined type. In the simple script code in this project, we have created a C# type named Important with a field named InstanceIdentifier. If I set a breakpoint just after we create the second instance of the Important type in the script, I can see that the generated code has set InstanceIdentifier to a value of 1, as expected.

image09

So viewing the contents of user-defined types in generated code is done the same way as you normally would in C++ code in Xcode.

Breaking on exceptions in generated code

Often I find myself debugging generated code to try to track down the cause of a bug. In many cases these bugs are manifested as managed exceptions. As we discussed in the last post, IL2CPP uses C++ exceptions to implement managed exceptions, so we can break when a managed exception occurs in Xcode in a few ways.

The easiest way to break when a managed exception is thrown is to set a breakpoint on the il2cpp_codegen_raise_exception function, which is used by il2cpp.exe any place where a managed exception is explicitly thrown.

image08

If I then let the project run, Xcode will break when the code in Start throws an InvalidOperationException exception. This is a place where viewing string content can be very useful. If I dig into the members of the ex argument, I can see that it has a ___message_2 member, which is a string representing the message of the exception.

image07

With a little bit of fiddling, we can print the value of this string and see what the problem is:

(lldb) p il2cpp::utils::StringUtils::Utf16ToUtf8(&ex->___message_2->___start_char_1)
(std::__1::string) $88 = "Don't panic"

Note that the string here has the same layout as above, but the names of the generated fields are slightly different. The chars field is named ___start_char_1 and its type is uint16_t, not uint16_t[]. It is still the first character of an array though, so we can pass its address to the conversion function, and we find that the message in this exception is rather comforting.

But not all managed exceptions are explicitly thrown by generated code. The libil2cpp runtime code will throw managed exceptions in some cases, and it does not call il2cpp_codegen_raise_exception to do so. How can we catch these exceptions?

If we use Debug > Breakpoints > Create Exception Breakpoint in Xcode, then edit the breakpoint, we can choose C++ exceptions and break when an exception of type Il2CppExceptionWrapper is thrown. Since this C++ type is used to wrap all managed exceptions, it will allow us to catch all managed exceptions.

image10

Let’s prove this works by adding the following two lines of code to the top of the Start method in our script:

Important boom = null;
Debug.Log(boom.InstanceIdentifier);

The second line here will cause a NullReferenceException to be thrown. If we run this code in Xcode with the exception breakpoint set, we’ll see that Xcode will indeed break when the exception is thrown. However, the breakpoint is in code in libil2cpp, so all we see is assembly code. If we take a look at the call stack, we can see that we need to move up a few frames to the NullCheck method, which is injected by il2cpp.exe into the generated code.

image01

From there, we can move back up one more frame, and see that our instance of the Important type does indeed have a value of NULL.

image04

Conclusion

After discussing a few tips for debugging generated code, I hope that you have a better understanding about how to track down possible problems using the C++ code generated by IL2CPP. I encourage you to investigate the layout of other types used by IL2CPP to learn more about how to debug the generated code.

Where is the IL2CPP managed code debugger though? Shouldn’t we be able to debug managed code running via the IL2CPP scripting backend on a device? In fact, this is possible. We have an internal, alpha-quality managed code debugger for IL2CPP now. It’s not ready for release yet, but it is on our roadmap, so stay tuned.

The next post in this series will investigate the different ways the IL2CPP scripting backend implements various types of method invocations present in managed code. We will look at the runtime cost of each type of method invocation.

May 19

A report from our Unite conferences in Asia

For those that know about Unite conferences, we thought we would share the marathon tour of the Asia Unites we did this April. This was a 5-city tour covering Tokyo, Seoul and Beijing, then splitting up between Taipei and Bangkok. From the R&D developer perspective, we get valuable time with our Asian colleagues, and great interactions with and learnings from our users in Asia. So, here’s a photo gallery offering of the trip and the conferences as we saw them.

Preparations

Leading up to the beginning of the marathon, a number of us headed to Tokyo to acclimate and prepare for the talks. In addition, the User Experience team made studio visits to gather usability information across our customers in Tokyo.

Unity devs from Copenhagen and evangelists prepping the keynotes at our Tokyo office. I've learned that Japan has the cutest pink cranes with hearts in the windows.

Unite Tokyo

The Unite Tokyo show was a great success, with a keynote featuring Palmer Luckey of Oculus and Ryan Payton of Camouflaj. Along with an Oculus demo and a Republique level featuring Unity-chan, we also announced our 3DS support.

We also got to see our now anime-stylized Dr. Charles Francis. He cuts quite the dashing figure.

Photo captions: the offered sticker set in Tokyo; Shinobu, Hiroki, David, Palmer, and Ryan; watching the Unity videos; Unity-chan added to a Republique level for the keynote; various Unite Tokyo decor; Unity-chan!; the crowds waiting to get into talks; Hiroki and Alex giving the roadmap session and emceeing the questions; Rene and Kim answering questions post-talk; celebrating the 3DS announcement and the wrap-up of Unite.

The User Experience team supplemented their session “Getting to Know You!” with card sorting exercises and user interviews.

Photo captions: the UX team had a crowd to interview post-talk; card sorting was a very popular UX exercise; card sorting in action; the UX team even received some Unity-chan fan art (it’s a whole booklet).

Unite Seoul

Seoul impressed us with an amazing venue and even more attendees (>2000). We had our second pass at our talks, and another crowd of excited and motivated developers. We’ve never seen folk more excited to see David Helgason, and the autograph-signing photo below shows it!

Photo captions: David drawing a Pro license winner; a banner across the lobby; Unite banners lining the hall in Seoul; Tim Cooper on the schedule; Rene Damm giving a talk on performance; David Helgason is popular for autographs in Seoul; of course, we had some Korean BBQ; the whole Unite Seoul crew.

Unite Beijing

In China, Unite Beijing impressed with sheer volume: there were more than 5000 attendees at the keynote. During the keynote, Taiwanese director and new media artist Hsin-Chien Huang showcased his masterpiece, “The Inheritance”, performing the piece for the first time in Mainland China. It offered an experience of the collision and mix of new media art and Unity.

As for the tech talks, they were packed, as you can see from the picture of the “Fast UI Best Practices” talk below. Similarly, Jesper’s talk on Global Illumination had a rapt audience.

Photo captions: the keynote hall and the backdrop for the keynote (Tim gives you a sense of size); keynote in progress; elements of “The Inheritance” #MadeWithUnity; the booth area; Alex and Jesper managed to catch one of the pair of Dr. Charles Francis walking the floor; the roadmap talk, where devs got to be up front and center; the UI talk was packed, with people standing in the aisles; waiting for the team photo; our evangelists Carl and Kelvin lead the charge; the full China staff and visiting Unity folk.

Unite Taipei

After Beijing, the dev team split up, half attending Taipei and half attending Bangkok. The Shanghai office, with a number of folk from Taiwan, carried the show through. The vibe from the attendees was great, with lots of advanced discussions and questions.

Photo captions: John Goodale giving the keynote; Kelvin during the keynote; Ryan Payton talking about Republique Remastered; prepping a hot air balloon offering to the gods.

Unite Bangkok

The Unity crew from Singapore put together a great first Unite for Bangkok. The training day preceding the event was well attended and a success. Overall, Bangkok was a younger and more novice crowd compared to all the others, but it is revealing itself to be an emerging area with lots of future potential.

Photo captions: Carl Callewaert with a well-attended training day; Evan Spytma giving our Bangkok keynote; Vijay during the 2D talk; the Singapore team set up a great starting Unite in Bangkok.

Big thanks to all the volunteers, partners, organizers and attendees!

We look forward to seeing you at the next Unite!

Xamarins on Film: New Video Resources

The Xamarin team is popping up everywhere; from conferences and user groups to Xamarin Dev Days, odds are high that you can find a member of our team at an event near you. If, however, we haven’t made it to your neck of the woods, observing a Xamarin on film can be just as fascinating and educational. For your viewing pleasure, and to teach you about a wide variety of mobile C# topics, we present footage from some recent sightings below.

Building Multi-Device Apps with Xamarin and Office 365 APIs

Have you been curious about how to integrate your Xamarin apps with Azure Active Directory and utilize the brand new Office 365 APIs? Look no further than James Montemagno’s session at this year’s Microsoft Build conference on how to integrate all of these services from a shared C# business logic backend.

Cross-Platform App Development with .NET, C#, and Xamarin

Xamarin Developer Evangelist Mike James recently spoke at the Technical Summit in Berlin, providing a complete overview of how to build native cross-platform apps with C# and Xamarin.

Tendulkar Explains

If you’re just getting started, you can learn the basics of Xamarin and mobile app development one step at a time by following along with Xamarin Developer Evangelist Mayur Tendulkar in his new, ongoing series, tendulkar-uvāca (Tendulkar Explains). The first episode, below, covers how to set up your development environment.

Developing Cross-Platform 2D Games in C# with CocosSharp

If you haven’t been following James Montemagno’s appearances on Visual Studio Toolbox, then you’re in for a treat! Officially setting the record for most appearances, his latest visit takes a look at cross-platform 2D games with CocosSharp.

Real-Time Monitoring of Mobile Apps with Xamarin Insights

In his 8th appearance on Visual Studio Toolbox, James joins Robert Green to discuss monitoring your apps in real time with Xamarin Insights.

Live Events

If you’d like to catch a Xamarin talk in person, check out our upcoming events here.

The post Xamarins on Film: New Video Resources appeared first on Xamarin Blog.

May 18

Join Xamarin at Twilio Signal

Join Xamarin at Twilio Signal, a developer conference in San Francisco, CA on May 19-20, 2015 covering communications, composability, iOS and Android, WebRTC, and much more. Key members of the Xamarin team will be available to answer your questions, discuss your apps and projects, and show you what’s new across our products.

Twilio Signal Conference Logo

Xamarin Developer Evangelist James Montemagno will also be presenting “C# and Twilio-powered iOS and Android experiences” on Wednesday, May 20 at 1:45 pm, covering how to leverage the Twilio Mobile Client native SDK for iOS and Android from C# to create a rich communication experience in your mobile apps.

We’ll be in the Community Hall, so stop by with your questions or just to say hello. If you’re not already registered, limited tickets remain, and you can use promo code “Xamaringuest” for 20% off registration. We look forward to seeing you there!

The post Join Xamarin at Twilio Signal appeared first on Xamarin Blog.

Traveling the world with the Asset Store

Terry Drever is traveling the world, and he funds his here-today-gone-in-a-few-months lifestyle exclusively by selling a portfolio of assets on the Asset Store. It gives him the freedom to work from anywhere with an Internet connection, to make enough money to cover his living costs and travel expenses, and the time to work on game ideas.

When I called Terry to interview him, the first thing he did was fetch a jumper. It was snowing outside, and this was a problem, because Terry had planned and packed for South-East-Asian sunshine. The trip to Sapporo, Japan, where he’s staying in an apartment he found through Airbnb, was something of an impulse decision.

His three month stop off in Japan is part of a tour around Asia that’s already taken in Hong Kong, Mainland China and Thailand. When his visa runs out, he’ll move on. Next stop Korea, and after that… wherever the fancy takes him. Terry, who originally comes from a remote Scottish island, isn’t planning to go home anytime soon.

Terry’s been working in the game industry for seven years, and he has a strong background in and passion for game programming. Two years ago, he started making tools and publishing them on the Asset Store.

“I had a large amount of experience at that point, and I knew what games companies wanted and what they needed.”

What Terry does is fill gaps. He’s always prototyping and trying out game ideas, and he uses Asset Store tools to build them. When the tools available on the Asset Store don’t deliver the functionality he needs to make a game, he makes a tool himself and publishes it as an extension on the Asset Store.

Over the course of the interview, it becomes apparent that Terry is a bit of a perfectionist. He’s worked on a number of game prototypes but hasn’t published them because they’re just not quite good enough.

Often, it’s the artwork that’s a problem. Though he has a stable prototype with game mechanics he’s happy with, the look and feel of the game often don’t meet his expectations.

Currently, Terry is talking to a number of Asset Store publishers to source artwork for his online multiplayer deathmatch game. It’s a game he’s always wanted to make, and one that will generate another Asset Store extension which he’s planning to publish in a couple of months.

His popular uSequencer cutscene tool resulted from work he did on another as yet unpublished title: A rhythm game for mobile inspired by Japanese games he used to play, in which the player’s action and resultant reward are tied to a sequence of game events. uSequencer more or less provides the game’s core architecture.

In the coming weeks, Terry will be visiting a game studio in Tokyo to see how they use uSequencer. He finds it fascinating discovering how the tools he’s made are used in practice, and all those insights are useful when it comes to maintaining and developing his tools.

Indeed, a new version of uSequencer is in the works. Terry’s considering a name change and is working to present his asset more professionally on the Asset Store using services from fiverr.com, because, yet again… he’s not satisfied with the visuals.

Terry’s recipe for publisher success:

  • Tie asset development to game development
  • Develop to fill a need, and find gaps in the market
  • Think carefully about how you name your product
  • Make sure your asset is presented in a professional manner

Best of luck Terry!

May 15

VR pioneers Owlchemy Labs

Owlchemy Labs work at the frontier of VR development. Which, you could say, puts them at the frontier of the frontier of game development. And, they like to boldly go. Studio CTO Devin Reimer enthuses about working with VR as a once-in-a-lifetime chance to shape a medium that’s going to be hugely influential.

Owlchemy Labs have been using Unity since formation in 2010, and to date they’ve made 10 different games. One of these is a WebGL version of alphabetical-list-sure-fire-winner AaaaaAAaaaAAAaaAAAAaAAAAA!!! for the Awesome. Developed in cooperation with Dejobaan Games, it was the first commercially available WebGL title made with Unity.

Adapting AaaaaAAaaaAAAaaAAAAaAAAAA!!! for the Awesome for another new platform (Oculus Rift) and releasing it to Steam opened a further door for the company. In November 2014, Owlchemy Labs were approached by Valve to develop a game that would show off the capabilities of what was then an unannounced platform: SteamVR.

A mountain of NDAs later, Devin and Studio Founder Alex Schwartz were hard at work on what became Job Simulator, a game that stole lots of hearts when it was showcased on the HTC Vive at GDC.

“I never expected a video game demo in which I grabbed a tomato (and threw it at a robot) to awe me so deeply. I … wanna play Job Simulator forever” IGN


Playtesting is key

Developing a playable prototype from scratch in a three-month timeframe, without the luxury of a polished and tested pipeline to SteamVR, meant that the Owlchemy team had to iterate very fast to get Job Simulator ready on time. Indeed, both Alex and Devin make a point of stressing that rapid, early playtesting is key to VR development in general.

“Oculus has a best practice guide for making VR content that they’re constantly updating and changing. No-one really knows at this stage what will work in VR without playtesting. You simply have to experiment and fail quickly. If, for example, the player in your game is a 50-storey Godzilla wandering around Manhattan, it’s best to prototype that mechanic and get a feeling for what playing it is actually like before you push forward to develop your game,” says Devin.

He recalls how, when developing Job Simulator, he worked alongside a colleague to adjust the size of the game’s microwave. With someone wearing the Oculus Rift headset calling out feedback, Devin could scale it in real time from the Unity editor and know that it looked and felt right inside the device: “You just don’t get a proper sense of the size of an object as the user experiences it from a conventional 2D monitor.”
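A minimal sketch of that kind of live tuning in Unity (my own illustration, not Owlchemy’s code; the class and field names are hypothetical):

using UnityEngine;

//Attach to a prop such as the microwave, then tweak "scale" in the Inspector
//during play mode while a tester wearing the headset calls out feedback.
public class LiveScaleTuner : MonoBehaviour
{
    [Range(0.1f, 5f)]
    public float scale = 1f;                           //adjusted live in the editor

    Vector3 _baseScale;

    void Awake()
    {
        _baseScale = transform.localScale;             //remember the authored size
    }

    void Update()
    {
        transform.localScale = _baseScale * scale;     //apply the tweak every frame
    }
}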


Optimize, optimize, optimize

With up to 5 million pixels being rendered 90 times per second, both Devin and Alex are keen to stress the importance of optimization. Alex likens it to making games in the PS2 era, and generally the studio’s long history of developing games for mobile has prepared them for the unique challenges of VR.

“Understanding how to keep your draw calls to a minimum and your shading simple are really important when you’re developing for VR,” says Devin.
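As a hedged sketch of what that can look like in Unity (my own illustration, not code from the interview; the class and field names are hypothetical), sharing one material across renderers and statically batching non-moving geometry are two straightforward ways to cut draw calls:

using UnityEngine;

//Give every child renderer the same shared material so Unity can batch them,
//then combine the static geometry under this root to reduce draw calls.
public class BatchingSetup : MonoBehaviour
{
    public Material sharedMaterial;                    //one simple, mobile-friendly material

    void Start()
    {
        foreach (var r in GetComponentsInChildren<Renderer>())
            r.sharedMaterial = sharedMaterial;         //identical materials can be batched together

        StaticBatchingUtility.Combine(gameObject);     //merge non-moving meshes into batches at runtime
    }
}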

VR for the future

Both Devin and Alex see VR as having the potential to redefine not just gaming, but industries, from remote surgery to architectural visualization and beyond.

Indeed, having seen Devin’s grandmother (whose gaming experience is limited, to say the least) pick up the HTC Vive headset and immediately and seamlessly interact with the world of Job Simulator, they’re confident that VR headsets will soon be a standard item of consumer electronics.

“I get asked the question, why VR? Why take the risk on such an unproven platform? And my feeling is that if we spent our time developing another me-too mobile title, then we’d be putting the studio at greater risk. By being amongst the first movers on a new platform that we truly believe in, we’re securing the future of our business. We’re in it for the long game with VR,” says Alex.

Best of luck to the Owlchemy Labs team!

Community Contributions You Won’t Want to Miss

Xamarin developers not only love building amazing mobile apps in C#, they also love helping the developer community at large. Whether through building great open-source libraries, components, and plugins or sharing their experience in forums, blog posts, and podcasts, our community consistently steps up to make development with Xamarin a pleasure. The links below will take you to some of our favorite content from our community over the past few weeks.

Podcasts

Great Community Posts

Tools & Frameworks

Xamarin.Forms

Thanks to these developers for sharing their knowledge and insight with the rest of the Xamarin community! If you have an article or blog post about developing with Xamarin that you’d like to share, please let us know by tweeting at @XamarinHQ.

The post Community Contributions You Won’t Want to Miss appeared first on Xamarin Blog.

May 14

Holographic development with Unity

In January of this year, Microsoft unveiled its most innovative and disruptive product in quite some time: HoloLens, an augmented reality headset that combines breakthrough hardware, input, and machine learning so that you can bring mixed reality experiences to life using the real world as your canvas. These are not transparent screens placed in the middle of a room with an image projected onto them, but immersive holograms that let you interact with the real world around you. HoloLens comes with a rich set of APIs for developing Windows Holographic applications that blur the line between the real and the virtual world.

As impressive as this may sound, Microsoft has been very quiet about the technology, releasing only a few videos and scattered bits of information. At its most recent developer conference, //Build 2015, however, the company let a select group of around 180 people, myself included, try it out.

Crafting 5 Star iOS Experiences With Animations

When I think about the iPhone apps I use the most, they all have one thing in common: they use custom animations to enhance the user experience. Custom animations make an app feel immersive and can add a whole new element of enjoyment for your users.

iOS is jam-packed with beautiful and subtle animations, visible from the moment you unlock the phone. Subtlety is the key to delivering the best experience, as Apple is very clear that developers should avoid animations that seem excessive or gratuitous.

Original keynote design

With over 1.2 million apps on the iOS App Store alone, your app needs to stand out from the crowd to get noticed. The easiest way to do this is with a unique user interface that goes beyond the generic built-in controls and animations.

In this blog, I’m going to show you how you can easily prototype and add custom animations to your iOS apps. Before we get started on the technical details, it’s worth discussing a tip used by some of the best mobile app designers.

Prototype in Keynote or PowerPoint

It’s no secret that creating something that appears simple often requires a great deal of iteration to refine it to its simplest form. This is definitely the case with UI design and animation design. Many UX designers prototype in tools like Keynote or PowerPoint, which include built-in animations. During this part of the design process, you are free from the complexities of the implementation and can focus on the desired result. It’s a step I highly recommend to anyone creating custom animations and transitions. Below is an animation movie exported from Keynote, which you can compare to the final animation.

keynote designed animation
Implementation

Once you’ve designed your animations, you’ll need to implement them. Rest assured, iOS has a fantastic animation API that makes this very straightforward. You’ll most likely reuse many of the animations you create across your app, something Apple actually recommends in the iOS Human Interface Guidelines.

The implementation for your views relies on Apple’s Core Animation framework, which consists of a number of extremely powerful APIs that cater to different requirements. For example, you can create block-based, keyframe, explicit, and implicit animations with Core Animation. Another option is to animate your views using UIKit, which is the approach I use a lot in Dutch Spelling.
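As a minimal sketch of the explicit Core Animation style (illustrative only; the badgeView field and the values are assumptions, not code from Dutch Spelling):

using CoreAnimation;
using Foundation;

//Explicitly animate a layer property with CABasicAnimation
var fade = CABasicAnimation.FromKeyPath("opacity");
fade.From = NSNumber.FromFloat(1.0f);
fade.To = NSNumber.FromFloat(0.0f);
fade.Duration = 0.3;
badgeView.Layer.AddAnimation(fade, "fadeOut"); //run the explicit animation on the view's layer
badgeView.Layer.Opacity = 0.0f;                //keep the final value once the animation ends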

With the UIKit approach, for example, I change the position of a button while fading it out over 0.2 seconds.

//Move button asynchronously and fade out
UIView.AnimateAsync(0.2, () =>
{
    button.Frame = CalculatePosition(_buttonsUsedInAnswer.IndexOf(button)); //slide to its new position
    button.Alpha = 0.0f;                                                    //fade out over the same 0.2 seconds
});

The snippet above deals with the button animation; below is a video of the animation running on the device.

final animation
Another example from Dutch Spelling is shrinking views over time. I use this to draw attention to other visual elements within the UI.

//Shrink the word view asynchronously, then set the font
public void ShrinkWord()
{
    //Animate back to an identity (unscaled) transform, shrinking the word
    //if it was previously scaled up
    var transform = CGAffineTransform.MakeIdentity();
    transform.Scale(1f, 1f);
    UIView.AnimateAsync(0.6, () =>
    {
        _word.Transform = transform;             //animate the scale change
        _title.TextColor = "3C3C3C".ToUIColor(); //switch the title to a muted grey
    });
    _word.Font = UIFont.FromName("Raleway-Regular", 32); //applied immediately, outside the animation block
}

Further Reading

You can find more examples to help you get started building your own 5-star app animations in our Core Animation documentation here.

The post Crafting 5 Star iOS Experiences With Animations appeared first on Xamarin Blog.

May 13

RSVP for Xamarin’s Google I/O 2015 Party

Join the Xamarin team on May 27th at Southside Spirit House from 7-10pm to kick off Google I/O!

Google I/O

Spend the night before Google I/O with the Xamarin Team and fellow mobile developers and check out the Xamarin Test Cloud wall in person to see how easy mobile testing can be.

Xamarin Test Cloud Wall

When: Wednesday, May 27th, 7pm–10pm
Where: Southside Spirit House, 575 Howard St, San Francisco, CA, 94105

RSVP

In the Bay Area but not attending Google I/O? Stop by anyway! You and your friends are welcome. Make the most of your time at Google I/O and schedule dedicated time with the Xamarin team while you’re in town for the conference. We’d love to meet you, learn about your apps and discuss ways we can help.

The post RSVP for Xamarin’s Google I/O 2015 Party appeared first on Xamarin Blog.

Monologue

Monologue is a window into the world, work, and lives of the community members and developers that make up the Mono Project, which is a free cross-platform development environment used primarily on Linux.

If you would rather follow Monologue using a newsreader, we provide the following feed:

RSS 2.0 Feed

Monologue is powered by Mono and the Monologue software.

Bloggers