
PAGE.RUNMODAL actions change in Dynamics NAV 2013


In Microsoft Dynamics NAV 2009, PAGE.RUNMODAL can return the actions OK, Cancel, LookupOK, LookupCancel, Yes, No, Close, FormHelp, RunObject, and RunSystem. As described at http://msdn.microsoft.com/en-us/library/dd355151.aspx, you can write code like:
IF Page.RUNMODAL(21, MyRecord, ...) = Action::Close THEN...;

But in Dynamics NAV 2013 the list of actions has changed to OK, Cancel, LookupOK, LookupCancel, Yes, No, RunObject, and RunSystem (http://msdn.microsoft.com/en-us/library/dd355151(v=nav.70).aspx).
As you can see, there is no 'Close' action anymore. This means that any code based on this action will never be executed.

The 'Close' action cannot be used in NAV 2013 because, with the web client running in a browser, we cannot detect what the user actually did: closed the page or closed the browser.

I'm writing this article because objects can be imported from previous NAV versions and then run.

Before NAV 2013 build 34588, we could compile these objects, but they produced errors at run time.
With fix KB 2836619, the compiler now shows an error here, so we need to manually change the code to the action we really want, or even change the application workflow, because there is no real replacement for 'Close'.
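For example, code that used to react to Action::Close has to be restructured around the actions that remain. The following is only a hedged sketch (HandleUserDismissedPage is a hypothetical function), and the right replacement depends on your workflow:

// NAV 2009 code that reacted to the page being closed:
// IF PAGE.RUNMODAL(21, MyRecord) = ACTION::Close THEN
//   HandleUserDismissedPage;

// NAV 2013: ACTION::Close no longer exists, so decide which of the
// remaining actions should drive the same logic, for example:
IF PAGE.RUNMODAL(21, MyRecord) IN [ACTION::Cancel, ACTION::LookupCancel] THEN
  HandleUserDismissedPage;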

So be ready to rewrite your code.

These postings are provided "AS IS" with no warranties and confer no rights. You assume all risk for your use.

Gedas Busniauskas
Microsoft Lithuania
Microsoft Customer Service and Support (CSS) EMEA


Creating your first Hello World control add-in for the Web client


In Microsoft Dynamics NAV 2013 R2, the Microsoft Dynamics NAV Web client now also supports extensibility, enabling control add-ins that display, for example, a chart or a map.

This means that you can write new control add-ins that you can use on both the Windows client and on the Web client. The following tutorial will walk you through the steps of creating a Hello World example that displays on the Web client.

To complete the following steps, you will need:

  • Microsoft Dynamics NAV 2013 R2 with a developer license. For more information, see System Requirements for Microsoft Dynamics NAV 2013 R2.
  • CRONUS International Ltd. demonstration database.
  • Microsoft Visual Studio 2008 Express, Microsoft Visual Studio 2008, Microsoft Visual Studio 2010, or Microsoft Visual Studio 2012.
  • Microsoft .NET Framework Strong Name tool (sn.exe). This is included with Visual Studio and the Windows SDK.
    • By default, the Microsoft .NET Framework Strong Name tool is located in C:\Program Files (x86)\Microsoft SDKs\Windows\v8.0A\Bin\NETFX <version> Tools or in a location similar to this one, depending on what operating system you are using.
  • Experience using Visual Studio.

To create your first Hello World control add-in for the Web client

  1. In Visual Studio, create a new Visual C# project of type Class Library. Name the solution HelloWorld.
  2. Add a reference to the following assembly:
    Microsoft.Dynamics.Framework.UI.Extensibility.dll.
  3. Open the Class1.cs file and add the following lines of code:

    // Copyright © Microsoft Corporation. All Rights Reserved.
    // This code is released under the terms of the Microsoft Public License (MS-PL, http://opensource.org/licenses/ms-pl.html.)
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Threading.Tasks;
    using Microsoft.Dynamics.Framework.UI.Extensibility;

    namespace HelloWorld
    {
        public class Class1
        {
            [ControlAddInExport("HelloWorld")]
            public interface iHelloWorld
            {
                [ApplicationVisible]
                event ApplicationEventHandler ControlAddInReady;
            }
        }
    }

  4. In Visual Studio, on the Project menu, choose ClassLibrary Properties, and then choose Signing. Select the Sign the Assembly check box and choose a strong name key file.

  5. Now build the project.

  6. Once the project is built, you must copy the output assembly file to the computer that is running the Microsoft Dynamics NAV Development Environment. You locate and copy the control add-in assembly file (.dll), which is in the control add-in project output folder. By default, this is C:\Documents\MyDocuments\Visual Studio\Projects\[Your Addin Project]\[Your Class Library]\bin\Debug.

  7. On the computer running the development environment, paste the assembly file in the Add-ins folder. By default, this is C:\Program Files (x86)\Microsoft Dynamics NAV\71\RoleTailored Client\Add-ins.

    At this point, in more complicated scenarios, you will create a manifest file and any resource files that you need to display your control add-in, such as images or style sheets. But for this HelloWorld example, we will skip right to registering the control add-in in Microsoft Dynamics NAV.

  8. To register a control add-in, you include it in the Control Add-ins page in the Microsoft Dynamics NAV client. Enter the name of the control add-in as specified in the Microsoft.Dynamics.Framework.UI.Extensibility.ControlAddInExport attribute, which in this code example is HelloWorld.

  9. Enter the public key token, which you can determine by running the Microsoft .NET Framework Strong Name tool on the assembly, for example: sn.exe -T HelloWorld.dll.

  10. Enter a descriptive text for your control add-in.
    Now only a few steps remain to include your HelloWorld control add-in on a page.

  11. From the Microsoft Dynamics NAV Development Environment, open Object Designer.

  12. Create a new page and add a field control that can hold the HelloWorld control add-in. Something very simple like this will do:

  13. Open the Properties window for the field control, locate the ControlAddIn property and select the HelloWorld control add-in from the Client Add-in lookup window. Choose the OK button.

  14. Save and compile the page. Remember the page ID for the next step.

  15. Run the new page from a web browser with the following command:
    http://MyWebServer:8080/DynamicsNAV71/WebClient/list.aspx?company=CRONUS%20International%20Ltd.&page=MyPageID

  16. You should now see something like this:

Now you probably want to try a more advanced example. Go to Help for Microsoft Dynamics NAV 2013 R2 and read through the documentation on extending Microsoft Dynamics NAV. For more information, see Walkthrough: Creating and Using a Client Control Add-in in the MSDN Library.

Best regards,

The Dynamics NAV team

Creating URLs to Microsoft Dynamics NAV Clients


The URL builder function, GETURL, was released in Microsoft Dynamics NAV 2013 R2 to reduce coding time for developers who need to create various URL strings to run application objects in the Windows client, the web client, or web services. In addition, the GETURL function makes multitenancy features more transparent to C/AL developers.

Details

Ever had to construct Windows client URLs like the one below?

dynamicsnav://myserver:7046/myInstance/myCompany/runpage?page=26

Today, Microsoft Dynamics NAV also provides a web client. This means that you must update your code to construct web client URLs too. What about multitenancy? The URL Builder should know if it is running in a multitenant setup and it should know how to choose the right tenant. What about maintaining this code?

The good news is that GETURL has been introduced to handle all URL building for you.

GETURL automatically handles:

  • Multitenancy
  • Correct URL format for each client
  • Publicly accessible hostnames.

Usage

The format is:

[String :=] GETURL(ClientType[, Company][, Object Type][, Object Id][, Record])

Where:

  • Client Type can be: Current, Default, Windows, Web, SOAP, or OData. This enables a range of scenarios for the C/AL developer, such as moving to the web client without changing the code that decides where the URL should point. You do this either by setting Client Type to Current and simply ensuring that the web client is used to invoke the link creation, or by setting Client Type to Default and changing the server's default client type to Web when you are ready to move to the web platform.
  • Object Type and Object ID define the type of the application object to run (Table, Page, Report, Codeunit, Query, or XMLport) and its ID.
  • Record specifies the actual data to run the URL on, such as:

Vendor.GET("Account No.");

GETURL(CLIENTTYPE::Web,COMPANYNAME,OBJECTTYPE::Page,27,Vendor)

Note: It is currently not possible to set filters on the record that you send as the last parameter to the GETURL function. However, it is possible to write your own code to compute and append the filter string to the URL that the GETURL function creates.
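For example, a minimal hedged sketch (Url is a Text variable, Vendor a Record variable; the exact "&filter=" argument syntax is an assumption, so check the URL format that your target client expects):

// GETURL does not append filters, so compute a filter string yourself
// and add it to the returned URL (URL-encode it as needed).
Vendor.SETRANGE("Country/Region Code", 'US');
Url := GETURL(CLIENTTYPE::Web, COMPANYNAME, OBJECTTYPE::Page, 27);
IF Vendor.GETFILTERS <> '' THEN
  Url := Url + '&filter=' + Vendor.GETFILTERS;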

The server name and instance are extracted automatically by GETURL and do not need to be specified by the C/AL developer. Furthermore, the multitenancy setup is transparent to the C/AL developer. No multitenancy parameters are specified when you call GETURL, because the function knows from the server setup if it is running in a multitenant environment and if so, it will add a string like "&tenant=MyTenant" to the URL.

When to Use

The GETURL function can generally be used every time a URL must be created. The following are some scenarios where the function is particularly useful.

  • Document approvals. For more information, see the “NAV Usage Example” section.
  • Reports containing drill-down links. (Beware of the resource cost of adding a new URL element to the Report dataset.)
  • When planning to write code for, or migrate to, various display targets (Microsoft Dynamics NAV Windows client, Microsoft Dynamics NAV web client, Microsoft Dynamics NAV web services) without having to explicitly specify which client to use.

Examples of Usage

The following are examples of calls to GETURL and their corresponding return value:

Command

URL

GETURL(CLIENTTYPE::Windows)

dynamicsnav://MyServer:7046/DynamicsNAV71//

GETURL(CLIENTTYPE::Web)

https://navwebsrvr:443/DynamicsNAV71_Instance1/Webclient

GETURL(CLIENTTYPE::OData)

http://MyServer:7048/DynamicsNAV71/OData

GETURL(CLIENTTYPE::SOAP)

http://MyServer:7047/DynamicsNAV71/WS/Services

GETURL(CLIENTTYPE::Current), i.e. when running this code in a Windows client session

dynamicsnav://MyServer:7046/DynamicsNAV71//

GETURL(CLIENTTYPE::Default), i.e. when the server configuration key DefaultClient is set to Windows

dynamicsnav://MyServer:7046/DynamicsNAV71//

GETURL(CLIENTTYPE::Windows,COMPANYNAME)

dynamicsnav://MyServer:7046/DynamicsNAV71/CRONUS/

GETURL(CLIENTTYPE::Windows,'')

dynamicsnav://MyServer:7046/DynamicsNAV71//

GETURL(CLIENTTYPE::Windows,'NONEXISTING Corp')

dynamicsnav://MyServer:7046/DynamicsNAV71/NONEXISTING Corp/

GETURL(CLIENTTYPE::Web,COMPANYNAME)

https://navwebsrvr:443/DynamicsNAV71_Instance1/Webclient?company=CRONUS

GETURL(CLIENTTYPE::Web,'')

https://navwebsrvr:443/DynamicsNAV71_Instance1/Webclient

GETURL(CLIENTTYPE::Web,'NONEXISTING Corp')

https://navwebsrvr:443/DynamicsNAV71_Instance1/Webclient?company=NONEXISTING Corp

GETURL(CLIENTTYPE::OData,COMPANYNAME)

http://MyServer:7048/DynamicsNAV71/OData/Company('CRONUS')

GETURL(CLIENTTYPE::OData,'')

http://MyServer:7048/DynamicsNAV71/OData

GETURL(CLIENTTYPE::OData,'NONEXISTING Corp')

http://MyServer:7048/DynamicsNAV71/OData/Company('NONEXISTING Corp')

GETURL(CLIENTTYPE::SOAP,COMPANYNAME)

http://MyServer:7047/DynamicsNAV71/WS/CRONUS/Services

GETURL(CLIENTTYPE::SOAP,'')

http://MyServer:7047/DynamicsNAV71/WS/Services

GETURL(CLIENTTYPE::SOAP,'NONEXISTING Corp')

http://MyServer:7047/DynamicsNAV71/WS/NONEXISTING Corp/Services

GETURL(CLIENTTYPE::Windows,COMPANYNAME,OBJECTTYPE::Table,27)

dynamicsnav://MyServer:7046/DynamicsNAV71/CRONUS/runtable?table=27

GETURL(CLIENTTYPE::Windows,COMPANYNAME,OBJECTTYPE::Page,27)

dynamicsnav://MyServer:7046/DynamicsNAV71/CRONUS/runpage?page=27

GETURL(CLIENTTYPE::Windows,COMPANYNAME,OBJECTTYPE::Report,6)

dynamicsnav://MyServer:7046/DynamicsNAV71/CRONUS/runreport?report=6

GETURL(CLIENTTYPE::Windows,COMPANYNAME,OBJECTTYPE::Codeunit,5065)

dynamicsnav://MyServer:7046/DynamicsNAV71/CRONUS/runcodeunit?codeunit=5065

GETURL(CLIENTTYPE::Windows,COMPANYNAME,OBJECTTYPE::Query,9150)

dynamicsnav://MyServer:7046/DynamicsNAV71/CRONUS/runquery?query=9150

GETURL(CLIENTTYPE::Windows,COMPANYNAME,OBJECTTYPE::XmlPort,5150)

dynamicsnav://MyServer:7046/DynamicsNAV71/CRONUS/runxmlport?xmlport=5150

GETURL(CLIENTTYPE::OData,COMPANYNAME,OBJECTTYPE::Page,27), i.e. when the web service is published

http://MyServer:7048/DynamicsNAV71/OData/Company('CRONUS')/PAG27Vendors

GETURL(CLIENTTYPE::OData,COMPANYNAME,OBJECTTYPE::Query,9150), i.e. when the web service is published

http://MyServer:7048/DynamicsNAV71/OData/Company('CRONUS')/QUE9150MyCustomers

GETURL(CLIENTTYPE::SOAP,COMPANYNAME,OBJECTTYPE::Page,27), i.e. when the web service is published

http://MyServer:7047/DynamicsNAV71/WS/CRONUS/Page/PAG27Vendors

GETURL(CLIENTTYPE::SOAP,COMPANYNAME,OBJECTTYPE::Codeunit,5065), i.e. when the web service is published

http://MyServer:7047/DynamicsNAV71/WS/CRONUS/Codeunit/COD5065EmailLogging

GETURL(CLIENTTYPE::Windows,COMPANYNAME,OBJECTTYPE::Page,27,record) List Page

dynamicsnav://MyServer:7046/DynamicsNAV71/CRONUS/runpage?page=27&bookmark=23;FwAAAAJ7/0kAQwAxADAAMwAw

GETURL(CLIENTTYPE::Windows,COMPANYNAME,OBJECTTYPE::Page,26,record) Card Page

dynamicsnav://MyServer:7046/DynamicsNAV71/CRONUS/runpage?page=26&bookmark=23;FwAAAAJ7/0kAQwAxADAAMwAw

GETURL(CLIENTTYPE::Web,COMPANYNAME,OBJECTTYPE::Page,27,record) List Page

https://navwebsrvr:443/DynamicsNAV71_Instance1/Webclient?company=CRONUS&page=27&bookmark=23;FwAAAAJ7/0kAQwAxADAAMwAw

GETURL(CLIENTTYPE::Web,COMPANYNAME,OBJECTTYPE::Page,26,record) Card Page

https://navwebsrvr:443/DynamicsNAV71_Instance1/Webclient?company=CRONUS&page=26&bookmark=23;FwAAAAJ7/0kAQwAxADAAMwAw

GETURL(CLIENTTYPE::OData,COMPANYNAME,OBJECTTYPE::Page,27,record)

http://MyServer:7048/DynamicsNAV71/OData/Company('CRONUS')/PAG27Vendors('IC1030')

GETURL(CLIENTTYPE::Web,COMPANYNAME,OBJECTTYPE::Page,27)

https://navwebsrvr:443/DynamicsNAV71_Instance1/Webclient?company=CRONUS&page=27

GETURL(CLIENTTYPE::Web,COMPANYNAME,OBJECTTYPE::Report,6)

https://navwebsrvr:443/DynamicsNAV71_Instance1/Webclient?company=CRONUS&report=6

If the GETURL function is called with invalid parameters, it will return an empty string. In that case, you can find the related error text by calling the GETLASTERRORTEXT function.
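For example (a minimal sketch; Url is a Text variable):

// Tables cannot be run through the web client, so this call returns an
// empty string and GETLASTERRORTEXT explains why (see the first row of
// the table below).
Url := GETURL(CLIENTTYPE::Web, COMPANYNAME, OBJECTTYPE::Table, 27);
IF Url = '' THEN
  MESSAGE(GETLASTERRORTEXT);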

Function Call

Error Message

GETURL(CLIENTTYPE::Web,COMPANYNAME,OBJECTTYPE::Table,27)

The specified object type parameter for the GetUrl function is not valid.

GETURL(CLIENTTYPE::Web,COMPANYNAME,OBJECTTYPE::Codeunit,5065)

The specified object type parameter for the GetUrl function is not valid.

GETURL(CLIENTTYPE::Web,COMPANYNAME,OBJECTTYPE::Query,9150)

The specified object type parameter for the GetUrl function is not valid.

GETURL(CLIENTTYPE::Web,COMPANYNAME,OBJECTTYPE::XmlPort,5150)

The specified object type parameter for the GetUrl function is not valid.

GETURL(CLIENTTYPE::OData,COMPANYNAME,OBJECTTYPE::Table,27)

The specified object type parameter for the GetUrl function is not valid.

GETURL(CLIENTTYPE::OData,COMPANYNAME,OBJECTTYPE::Page,27)

The Page object, 27, that is specified for the GetUrl function has not been published in the Web Services table.

GETURL(CLIENTTYPE::OData,COMPANYNAME,OBJECTTYPE::Report,6)

The specified object type parameter for the GetUrl function is not valid.

GETURL(CLIENTTYPE::OData,COMPANYNAME,OBJECTTYPE::Codeunit,5065)

The specified object type parameter for the GetUrl function is not valid.

GETURL(CLIENTTYPE::OData,COMPANYNAME,OBJECTTYPE::Query,9150)

The Query object, 9150, that is specified for the GetUrl function has not been published in the Web Services table.

GETURL(CLIENTTYPE::OData,COMPANYNAME,OBJECTTYPE::XmlPort,5150)

The specified object type parameter for the GetUrl function is not valid.

GETURL(CLIENTTYPE::SOAP,COMPANYNAME,OBJECTTYPE::Table,27)

The specified object type parameter for the GetUrl function is not valid.

GETURL(CLIENTTYPE::SOAP,COMPANYNAME,OBJECTTYPE::Page,27)

The Page object, 27, that is specified for the GetUrl function has not been published in the Web Services table.

GETURL(CLIENTTYPE::SOAP,COMPANYNAME,OBJECTTYPE::Report,6)

The specified object type parameter for the GetUrl function is not valid.

GETURL(CLIENTTYPE::SOAP,COMPANYNAME,OBJECTTYPE::Codeunit,5065)

The Codeunit object, 5065, that is specified for the GetUrl function has not been published in the Web Services table.

GETURL(CLIENTTYPE::SOAP,COMPANYNAME,OBJECTTYPE::Query,9150)

The specified object type parameter for the GetUrl function is not valid.

GETURL(CLIENTTYPE::SOAP,COMPANYNAME,OBJECTTYPE::XmlPort,5150)

The specified object type parameter for the GetUrl function is not valid.

GETURL(CLIENTTYPE::SOAP,COMPANYNAME,OBJECTTYPE::Page,27,record)

You cannot specify a record parameter for the GetUrl function when the object type is SOAP

NAV Usage Example

The following example shows how to use the GETURL function in codeunit 440 to ensure that the notification mail in Document Approvals can link to both the Microsoft Dynamics NAV Windows client and the Microsoft Dynamics NAV web client:
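The actual codeunit 440 code is not reproduced here; the following is only a hedged sketch of the idea, with hypothetical Text variables, assuming SalesHeader is positioned on the document awaiting approval:

// Build one link per client for the notification mail.
WinClientLink := GETURL(CLIENTTYPE::Windows, COMPANYNAME, OBJECTTYPE::Page, PAGE::"Sales Order", SalesHeader);
WebClientLink := GETURL(CLIENTTYPE::Web, COMPANYNAME, OBJECTTYPE::Page, PAGE::"Sales Order", SalesHeader);
// Both links are then embedded in the notification mail body.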

The resulting UI looks as follows.

The first link opens the approval document in the Microsoft Dynamics NAV Windows client. The second link (Web view) opens the same document in the Microsoft Dynamics NAV web client.

 

Best regards,

Mike Borg Cardona, Bogdana Botez, and the Microsoft Dynamics NAV team

Uploading large files from the Web client


If like me you’ve been tinkering with your Lumia 1020 smartphone and snapping some high-resolution photos over the holidays, you may be wondering whether Microsoft Dynamics NAV 2013 R2 can handle such large photo sizes on the Microsoft Dynamics NAV Web client.

Let's say you want to capture photos of your product catalogue (using the Photo action on the Item card). When uploading large files such as high-resolution photos, Microsoft Dynamics NAV will show the following error:

The file that you are trying to use is too large.

Clearly. Whilst Microsoft Dynamics NAV defaults to 4 MB uploads on the Web client, you will be pleased to hear it takes just a minor tweak to your IIS configuration to support larger file uploads:

  1. Launch the IIS Configuration Manager (in this example I am using IIS 8 but similar steps apply to earlier versions).
  2. Select the Microsoft Dynamics NAV web site in the left pane and then double-click Request Filtering.
  3. Right-click and select Edit Feature Settings… in the context menu.
  4. Set field Maximum allowed content length to an appropriate value such as 100000000 (in bytes) and click OK.
  5. Now select the Microsoft Dynamics NAV web site in the left pane again and double-click the Configuration Editor.
  6. Make sure that the From field is set to "Microsoft Dynamics NAV 2013 R2 Web Client Web.config"
  7. Set field Section to system.web/httpRuntime
  8. A number of properties should appear. Set maxRequestLength to an appropriate value such as 100000 (this time it is in kilobytes) and click the Apply action on the right.

 

The new settings should take effect immediately without the need for an IIS or site reset.

Go ahead and try to upload a large file now and BANG! the upload succeeds.

You can read more about the maxRequestLength property here:

http://msdn.microsoft.com/en-us/library/system.web.configuration.httpruntimesection.maxrequestlength(v=vs.110).aspx

Best regards,

Lukasz Zoglowek and Mike Borg Cardona

Important Information: In a live database with active users connected, changing an object multiple times or compiling all objects can cause data loss in NAV 2013 R2


You may experience data loss in Microsoft Dynamics NAV 2013 R2 in the following situations, separately or in combination:

  • Changing an application object more than once, for example by two different developers, in the same database connected to the same Microsoft Dynamics NAV Server instance while users are working in the system.
  • Compiling all application objects, and thereby potentially changing objects more than once, in a database that is connected to a Microsoft Dynamics NAV Server instance that users are accessing.

To avoid the problem, we advise that you work according to the following best practices:

  • Application developers must work on their own database and connect to their own Microsoft Dynamics NAV Server instance. When you deploy changes to the live production database, make sure that no users are working in the system.
  • You must compile objects only when no users are working in the system, including users connecting through NAS. 

With update rollup 5 for Microsoft Dynamics NAV 2013 R2 - KB 2937999, this issue has been fixed and you do not have to take the precautions described above. However, we still advise that you separate development from production databases.

Please note that implementing update rollup 5 will require a database conversion.

 

 

SAVEASWORD and the Fixed Header design


Microsoft Dynamics NAV 2013 and Microsoft Dynamics NAV 2013 R2 implement a new rendering extension and give users the capability of saving a report in Word .DOC (NAV 2013) or .DOCX (NAV 2013 R2) format. The same action can be performed from C/SIDE by using the SAVEASWORD C/AL statement. For more information, see the MSDN Library: http://msdn.microsoft.com/en-us/library/hh165802(v=nav.71).aspx
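As a quick, hedged illustration of the C/AL call (report 4 and the file path are only examples; GLAccount carries whatever filters you want applied):

// NAV 2013 produces a .doc file, NAV 2013 R2 a .docx file.
GLAccount.SETRANGE("Date Filter", 010114D, 311214D); // example period
REPORT.SAVEASWORD(REPORT::"Detail Trial Balance", 'C:\Temp\DetailTrialBalance.docx', GLAccount);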

However, the Report Viewer Word rendering engine has some caveats, related to its design, that you should be aware of.

When you invoke the Word rendering engine, the Page Header and Page Footer are transformed into a static Word header and a static Word footer. The SSRS team followed the basic Word header/footer concept and made these static sections. Therefore, the FIRST value generated in Report Viewer is the one that is sent to the Word document; typically, no further processing is applied to the value expression of any control in the Page Header or Page Footer. In other words, all the ReportItems!Field.Value or Code.Function() expressions in the Page Header or Page Footer are evaluated only once and never change at run time.

Let's illustrate the challenge with the Word rendering engine and how it handles the Page Header and Page Footer within the context of Microsoft Dynamics NAV. The following steps demonstrate the behavior in a few seconds:

  1. Open the Microsoft Dynamics NAV 2013 or Microsoft Dynamics NAV 2013 R2 Windows client.
  2. Go to Posted Documents (in the navigation pane).
  3. Go to Posted Sales Invoice list.
  4. Select all invoices, and then choose Print.
  5. On the report request page, choose Print, then choose Microsoft Word… and you have your repro: all the printed invoices appear to belong to the first customer and show the same invoice no., etc.

This designed limitation is described in the Page Headers and Footers section of the Exporting to Microsoft Word (Report Builder and SSRS) article in the TechNet Library:

http://technet.microsoft.com/en-us/library/dd283105(v=sql.110).aspx

The article points this out: "However, when a page footer or page header contains a complex expression that evaluates to different values on different pages of a report, the same value might display on all report pages. "

Since this is a declared design limitation, you could target SAVEASWORD to those reports where:

  • The report does not use Page Header / Page Footer

Or

  • The report has a static Page Header / Page Footer

If you would like to have this design changed in a future version by the SQL Server Reporting Services (SSRS) team, I encourage you to log a design change request on Microsoft Connect or vote for an existing one, if any.

These postings are provided "AS IS" with no warranties and confer no rights. You assume all risk for your use.

Duilio Tacconi (dtacconi)

Microsoft Dynamics Italy

Microsoft Customer Service and Support (CSS) EMEA

A special thanks to Peter Borring Sørensen & Torben Wind Meyhoff from the Microsoft Dynamics NAV Core Team

RDLC Report and Performance in Microsoft Dynamics NAV


It has been a while since I last blogged, and I am taking the chance now to post on a very delicate topic. The main focus is performance (typically, Out Of Memory exceptions) with Microsoft Dynamics NAV 2009 R2, Microsoft Dynamics NAV 2013, and Microsoft Dynamics NAV 2013 R2.

I would encourage you to post your comments and thoughts. Have a good read.

Microsoft Dynamics NAV 2009 R2 Considerations (RDLC 2005 - Report Viewer 2008)

The entire Microsoft Dynamics NAV 2009 stack (RTM, SP1, R2) is built and sealed for the x86 platform. This means that both the client (Microsoft.Dynamics.NAV.Client.exe) and the server (Microsoft.Dynamics.NAV.Server.exe) are 32-bit components of the NAV platform. RDLC reporting (enhanced reporting, in recent NAV terminology) in Microsoft Dynamics NAV 2009 consists of a Report Viewer .NET control targeted for WinForms, and Microsoft Dynamics NAV hosts this control in a Microsoft Dynamics NAV modal page (that is, roughly speaking, a WinForm) within the RTC boundaries.

Report Viewer works, in principle, by accepting two items:

-          A metadata definition: a plain XML file (Report.rdlc) that defines the structure of the report at rendering time.

-          A dataset: a serialized XML file that contains the data to be rendered in the way defined by the RDLC definition.

With Microsoft Dynamics NAV 2009, Report Viewer works client side to render the report to the user (Preview Layout rendering extension), and therefore it needs both the RDLC definition and the dataset streamed completely from the server to the client. Since Microsoft Dynamics NAV 2009 SP1, this streaming process that populates client memory has used a chunking method that can be summarized as follows.

SQL Server processes the query and generates a complete result set. The result set is sent to the Microsoft Dynamics NAV Server as normal TCP packets, and the Microsoft Dynamics NAV Server, while receiving these packets from SQL Server, forwards the result set in chunks to the client, clearing its own memory once each chunk has been received by the client. This was introduced to avoid memory leaks server side: the server works only as a routing point for packets / chunks from SQL Server to the Microsoft Dynamics NAV Windows client. If you open Task Manager on both the middle-tier machine and the client machine while processing a heavy report (or any report), you might notice that the memory footprint server side stays constant and pretty low, while the client one keeps growing until it reaches a physical limit.

When it reaches its physical limit, you receive a typical error message like the one shown below (an explicit Out Of Memory exception).

Most of the time, however, Report Viewer clears the error message and simply displays a single blank page (an implicit Out Of Memory exception) or several pages with mixed random string values (a blurred Out Of Memory exception).

I do not want to go deeper into the technicalities that lie beneath, but you have to consider the following:

  1. The Microsoft Dynamics NAV 2009 R2 RoleTailored client is a 32-bit application (with a theoretical limit of 2 GB of memory per process).
  2. The Microsoft Dynamics NAV 2009 R2 RoleTailored client and report(s) share the same application domain (that is, the same memory space).
  3. The Report Viewer control runs in a sort of sandbox mode inside the Microsoft Dynamics NAV WinForm, so the memory available to it is even more limited (approx. 1 GB).

Based on the assumptions above, my findings on performance with heavy reports are the following:

  1. The Report Viewer Preview rendering extension within the RoleTailored client raises an Out Of Memory exception when client process memory reaches approximately 0.8 – 1.1 GB (this varies with several factors, e.g. OS, hardware type, available resources, etc.).
  2. Considering a typical Microsoft Dynamics NAV dataset (60 – 80 columns on average), there is a potential risk of Out Of Memory in the range of 40K up to 100K rows. This depends on the number of columns in the dataset and on the quality of the columns (e.g. their data types, whether and how they are populated, etc.).

Putting all these considerations together, these are the actions that you might (or have to) take, depending on your scenario, within the Microsoft Dynamics NAV 2009 R2 stack:

  1. If your report raises an Out Of Memory exception in a range lower than or close to 80/90K rows, then you can try to optimize the report by reducing the dataset. Reducing the dataset means:
    1. Write optimal code for the RDLC report (e.g. use CurrReport.SKIP when needed, avoid using data items for CALCSUMS and use record C/AL variables instead, rewrite the report to use drill-through to reach details if required - so it is still possible to move calculations to C/SIDE - or refactor to use a hyperlink to another report for the details, etc.)
    2. Reduce the dataset columns (e.g. eliminate section controls that you do not use in the RDLC report)
    3. Reduce the dataset rows (refactor as much as possible to push into the dataset only the data that needs to be printed)
  2. If your report is already in a range equal to or higher than 80/90K rows, then you have no other choice with NAV 2009 R2 than the following:
    1. Delete the RDLC report layout and enable Classic client report fallback (this is the solution that I warmly suggest, and it really is a finger-snap solution)
    2. (This is pretty obvious.) Apply filters in the request page (or through C/AL code) to reduce the number of rows in the dataset and, instead of printing the report in one single shot, print it N times.

And this is all about the Microsoft Dynamics NAV 2009 R2 stack and how to solve or work around the problem in the most feasible (and easiest) way within this version.

Microsoft Dynamics NAV 2013 (RDLC 2008 – Report Viewer 2010) / NAV 2013 R2 (RDLC 2010 – Report Viewer 2012) is another story and another type of challenge.

To resume, the milestone changes between Microsoft Dynamics NAV 2009 and Microsoft Dynamics NAV 2013 (and R2) are the following:

  1. The Microsoft Dynamics NAV Server is now 64-bit (finally…), while the Windows client remains a 32-bit application. This means that the client is still a physical bottleneck, and the considerations about memory footprint and dataset volume reported previously for Microsoft Dynamics NAV 2009 R2 are still valid.
  2. You can no longer enable Classic client report fallback; you have to use the RDLC report in every case.

With these two new constraints in mind, here is how you could work around or resolve the performance problem with Microsoft Dynamics NAV 2013 / Microsoft Dynamics NAV 2013 R2:

  1. The same considerations about optimizing reports apply: if you receive (or expect to receive) an Out Of Memory exception, optimize the report as much as you can IF you forecast that, in the end, your dataset will never exceed 70/90K rows.
  2. If you have heavy reports with a dataset volume higher than 70/90K rows, then this is what you could do:
    1. Filter the data and print the report N times, wherever possible (use common sense)
    2. Use the Job Queue to enable server-side printing. What is server-side printing? It is simply running Report Viewer entirely in the background through NAS services (that is, using background sessions through the STARTSESSION C/AL statement). Running server side means running in a 64-bit context, so Report Viewer (a .NET component targeted for Any CPU, hence 64-bit enabled) will use ALL the memory available from the OS (e.g. if you have 32 GB, it could end up consuming all of it when working with several MILLION dataset rows – I have seen it with my own Italian eyes) and you will succeed in printing the report or, better, use SAVEASPDF to generate a PDF file to be consumed by the user.
    3. Use the STARTSESSION C/AL statement as you like in your own custom code: gather the user's filters and parameters, pass them to a codeunit that filters the record(s), and run SAVEASPDF in the background in whatever way suits you (see the sketch after this list).
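A minimal, hedged sketch of that idea; the parameter record (ReportRunParams) and codeunit "Background SaveAsPdf" are hypothetical names, not standard NAV objects, and SessionId is an Integer variable:

// Caller: gather the user's filters/parameters into ReportRunParams, then
// hand off to a background (server-side, 64-bit) session.
COMMIT; // make the saved parameters visible to the new session
STARTSESSION(SessionId, CODEUNIT::"Background SaveAsPdf", COMPANYNAME, ReportRunParams);

// Inside codeunit "Background SaveAsPdf" (OnRun trigger), apply the filters
// carried by the parameter record and render server side, for example:
// REPORT.SAVEASPDF(ReportRunParams."Report ID", ReportRunParams."File Name");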

THE FUTURE

The Microsoft Dynamics NAV Core team is fully aware of these scenarios and is working hard on improving RDLC report performance and the overall experience in future versions of Microsoft Dynamics NAV.

NOTE:

In this blog post you will find a set of objects (1 report, 1 codeunit, 1 page) to easily simulate an Out Of Memory exception or save the report as PDF in the background.

Just import the NAV object set and run page 50666. You can choose to simulate an Out Of Memory exception by clicking the appropriate action and then previewing the report, or you can choose to SAVEASPDF the same report by enabling a background session that performs this action server side.

Be sure to have at least 4 GB of available memory server side, and just wait for the background session to finish its activity and stream out the content of the report (this should take close to 5/6 minutes with a standard CRONUS database, depending on resources).

These postings are provided "AS IS" with no warranties and confer no rights. You assume all risk for your use.

Duilio Tacconi (dtacconi)

Microsoft Dynamics Italy

Microsoft Customer Service and Support (CSS) EMEA

A special thanks to Peter Borring Sørensen & Torben Wind Meyhoff from the Microsoft Dynamics NAV core team.

NAV Design Pattern of the Week – the Hooks Pattern


A new series of NAV Design Patterns is just starting. To make a good beginning, the first one is not chosen randomly. It is indeed a very powerful pattern – the “Hooks”, by Eric Wauters. It is also different from most patterns you’ve seen so far, in that it does not exist in the core product. “How can it be? A powerful NAV pattern which is not in NAV?”, you wonder.

This is exactly the point of the Hooks. It is a partner pattern – something you, as a partner, can use, when you implement your customization. And if done correctly, it has the potential to make the next upgrade superfast.   

Are you having a hard time installing update after update, rollup after rollup? Does it consume more resources than you had wished? Is the customization code entangled with the core product code, in such a way that each update requires complicated merges, with the developer trying to figure out which code goes where?

To keep it short, here it is – the Hooks pattern. It is most powerful when starting a new customization, as it can then be implemented fully. It will keep things simple. As for legacy customizations... it is still possible to use it, by refactoring key areas, such as the areas with the most update merge issues.

Hooks Pattern

by Eric Wauters (waldo), Partner-Ready-Software

Meet the Pattern

By minimizing the code in already existing application objects, you will make the upgrade process much easier, and all customization business logic will be grouped in new objects. When using atomic coding, it is very easy to read what is being customized at a certain place in an existing part of the application.

To minimize the impact of customizations, the idea of hooks is:

  • First of all, name the places in the already existing code where customization is needed;
  • Second, place your business logic completely outside the already existing application code.

Know the Pattern

When development is done over the years, by different developers with different mindsets, the standard codebase gets changed a lot: multiple lines of code are added, local and global variables are added, keys are added or changed, existing business logic is changed, and so on. In other words, the standard objects are being changed all over the place.

After years, it is not clear why a change was done, nor where the change was intended to be done. And the latter is quite important in an upgrade process, when code in the base product is being refactored: if the exact place where the posting of the customer entry happens is redesigned into a separate object, the first thing I need to know is that I made a certain change at the place "where the posting of the Customer Entry starts". The definition of that place is what we call a "hook".

I recommend using this concept for the following:

  • All objects of the default applications that need to be changed
  • On objects that should not hold any business logic (like tables, pages, XMLPorts)

Use the Pattern

Step 1: if it does not exist yet, you create your hook codeunit. As the name suggests, this is always a codeunit. We apply the following rules to it:

  • One hook always hooks into one object. This basically means that the new codeunit is declared in only one other object (which is its parent object).
  • The naming convention is: The_Original_Object_Name Hook. Naming conventions are important, both to find the mapped object and to be able to group the hooks.

Step 2: you create the hook, which is basically a method (function) in your codeunit. The naming is important:

  • The naming of the hook should NOT describe what it is going to do (So, examples like CheckMandatoryFields, FillCustomFields should not be used as a hook)
  • The naming of the hook should describe WHERE the hook is placed, not what the hook will be doing (as nobody is able to look into the future .. :-))
  • To help with the naming, it is a good convention to use the "On" prefix for these triggers. This way, it is very clear what is a hook and what is not.

Step 3: it is time to hook it into its corresponding object, at the right place in the business logic of that object. You do this by declaring your codeunit as a global in your object and calling the created hook function at its place in the business logic. This way, these one-liners apply:

  • A hook codeunit is used in one object only (its corresponding object).
  • A hook (function) is used only once in that object. As a consequence, changing its parameters is cheap: you only need to change one function call.
  • The codeunit is declared as a global. That exact global is the only custom declaration in the existing object; everything else is pushed to the hook codeunit.

Step 4: implement your business logic in the hook. Do this in the most atomic way, as there is a good chance that this same hook will be used for other business logic as well. It is best to use a one-line function call to the business logic, so that the hook function itself stays readable.

Example

Suppose we want to add business logic just before posting a sales document. In that case, we have to look for the most relevant place, which is somewhere in the "Sales-Post" codeunit. So (a sketch of the result follows the steps below):

Step 1: create codeunit Sales-Post Hook:

Step 2: create the hook function OnBeforePostDocument:

Step 3: declare a global in the Sales-Post codeunit, called SalesPostHook.  Then,...
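For readers without the screenshots at hand, this is a rough, hedged C/AL sketch of the wiring that steps 1-3 describe (the function body and call site are illustrative only):

// Codeunit "Sales-Post Hook" exposes one function per hook, for example:
//   OnBeforePostDocument(VAR SalesHeader : Record "Sales Header")
//   BEGIN
//     // atomic one-line calls to the custom business logic go here
//   END;

// In codeunit 80 "Sales-Post", SalesPostHook is the single custom global
// (Codeunit "Sales-Post Hook"), and the hook is called at the chosen spot:
SalesPostHook.OnBeforePostDocument(SalesHeader);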

Continue reading on the NAV Patterns Wiki...

 

Best regards,

The NAV Patterns team


Formatted decimal values dropping symbols in RDLC


I have seen several reports of Microsoft Dynamics NAV RDLC reports dropping symbols from decimal values when printing a report. One of the most common occurrences that I have seen is when the end user uses the de-CH locale. After some investigation, it appears that this problem is due to some changes between Windows 7 and Windows 8 (and Windows 2012).

Windows 7 (de-CH): 1’745.55

Windows 8 (de-CH): 1 745.55                      [Note: the thousand separator is not printed]

During my investigation I found some references to using "custom locales" to resolve this issue. In order to use a custom locale, you must build a replacement locale using the Locale Builder 2.0 tool and then replace your existing locale. If you are interested in more information, please follow this link (http://msdn.microsoft.com/en-us/library/windows/desktop/dd317785(v=vs.85).aspx).

Another option ...

There is another way that doesn't involve changing out your locale. All it takes is a little code and some familiarity with editing RDLC reports. For this blog I am going to use Microsoft Visual Studio 2012 and Microsoft Dynamics NAV 2013 R2. I have outlined the required steps below.

1) After designing a report in the Microsoft Dynamics NAV 2013 R2 development environment, load the RDLC Layout (View\Layout).  Depending on your setup, this will either load Microsoft Visual Studio 2012 or Report Builder 3.0 (if you are using Microsoft SQL Server 2012).

2) Once the layout is open, display the properties of the report.  This can be done by opening the Properties window (CTRL+W,P) and clicking in the blank area around the Report or right click in the blank area around the report and select Report Properties.

 
3) You should see a Properties window like this …

Notice the highlighted Code property – click on the value and then click the ellipsis (...) that appears. This will open the Code window.

 

4) Scroll to the bottom of the existing code functions and add the following code …

Public Function FormatDecimalString(ByVal Amount As Decimal, ByVal FormatString As String, ByVal Language As String) As String
  Dim CultureInfo As New System.Globalization.CultureInfo(Language)
  Return Amount.ToString(FormatString, CultureInfo)
End Function

The new function will convert a decimal amount to a specific cultural locale based on the format string provided in the dataset.

5) Next, select a textbox that needs to have its printed format corrected.  Once the textbox has its border highlighted, right click and select the Expression option.  This will open the Expressions window.

 

6) Now, copy the following code into the “Set expression for: Value” textbox.  The following code will need to replace the reference to the current value.

=Replace(Code.FormatDecimalString(<FIELDNAME.VALUE>,<FIELDNAMEFORMAT.VALUE>,"en-US"),",","'")

Make sure to replace <FIELDNAME.VALUE> with the appropriate value reference and replace <FIELDNAMEFORMAT.VALUE> with the appropriate format reference.  This code will replace the comma in the en-US format with an apostrophe to match the de-CH format.

For Example:  If you use the GLBalance field from the Dataset of REPORT 4 – Detail Trial Balance, then the code would look like this …

=Replace(Code.FormatDecimalString(Fields!GLBalance.Value,Fields!GLBalanceFormat.Value,"en-US"),",","'")

When using the de-CH locale, the number format is similar to what en-US uses, except that the apostrophe in the de-CH format is replaced with the comma in the en-US format.

de-CH 1’745.55 ==> en-US 1,745.55

The basic premise behind these code changes is to convert the decimal value to a locale format that is then returned as a text.  Once in the text format, it is now possible to replace the values for the thousands and decimal separators as needed.

 Hopefully, this will make your reporting experience in Microsoft Dynamics NAV 2013 R2 a little easier.

 

These postings are provided "AS IS" with no warranties and confer no rights. You assume all risk for your use.

The New Table Synchronization Paradigm in Microsoft Dynamics NAV 2013 R2


Microsoft Dynamics NAV 2013 R2 shipped with a brand new feature that introduces big challenges for the whole Microsoft Dynamics NAV channel: multitenancy. In simple words, multitenancy allows partners to handle hosting scenarios, on-premises and in cloud services, in an easier and more flexible way than in the past.

Before Microsoft Dynamics NAV 2013 R2, partners and customers could only use a single-tenant scenario (here also called Legacy mode).

Below is a short explanation of how table synchronization used to work in earlier versions.

Microsoft Dynamics NAV 2009 / Microsoft Dynamics NAV 2013

  1. At object import/compile, C/SIDE checks the object metadata version in working memory and compares it to the version in the Object Metadata table to decide if, and what kind of, change is made.
  2. Schema changes are DIRECTLY APPLIED to the SQL Server database by C/SIDE if there is no breaking schema change; otherwise, an error is thrown by C/SIDE based on the error caught from SQL Server.
  3. The Object Change Listener checks for changes in metadata and updates the Microsoft Dynamics NAV Server cache with data from the Object Metadata table when a change is detected.

A synchronization failure would typically be reported with an error like “The Object Metadata does not exist. Identification and values …“ when running the Microsoft Dynamics NAV Windows client.

The multitenancy feature has also changed how Microsoft Dynamics NAV developers have to deal with object changes, particularly table objects. Multitenancy implies that the table structure definition is stored in the application database and has to be applied to one or more separate data databases, called tenants. From a development perspective, this means that modifications made to a table object in C/SIDE are NOT DIRECTLY applied to the SQL Server structure; a secondary action is needed to apply these modifications and make them persistent on the SQL Server side. This process is called synchronization. Microsoft Dynamics NAV 2013 R2, then, comes with a time decoupling between table metadata creation (C/SIDE) and data structure changes (SQL Server).

In order to simplify the current design, the Microsoft Dynamics NAV development team decided to handle single- and multitenant scenarios in the same way (roughly speaking a single-tenant / Legacy mode is handled as a multitenant scenario with a single tenant database constantly mounted against an application database).

Below is a short explanation of how this works in practice.

Microsoft Dynamics NAV 2013 R2

SCENARIO 1:

  • Single-tenancy / Legacy mode
  • “Prevent data loss from table changes” = Yes (default):

 

  1. At object import/compile, C/SIDE checks the object metadata version in working memory and compares it to the version in the Object Metadata table to decide if, and what kind of, change is made. (Same as in Microsoft Dynamics NAV 2009 and Microsoft Dynamics NAV 2013.)
  2. C/SIDE then CALLS THE Microsoft Dynamics NAV Server to check for breaking schema changes in the SQL Server structure.
    If C/SIDE is unable to call the Microsoft Dynamics NAV Server, or if a breaking schema change is attempted (an action that cannot be performed due to the current SQL Server structure, such as deleting a field containing data), a C/SIDE error is reported accordingly and the changes to the Object Metadata table are not committed.
    If the change is evaluated as not breaking the SQL Server schema, the metadata from C/SIDE working memory is saved and committed to the Object Metadata table.
    PLEASE NOTE: at this stage NO CHANGES ARE MADE TO THE SQL SERVER DATA STRUCTURE.
  3. When synchronization is triggered, the Microsoft Dynamics NAV Server compares the Object Metadata table with the content of the Object Metadata Snapshot table. Any difference in the value of the "Hash" field tells the Microsoft Dynamics NAV Server that a change exists and must subsequently be applied physically, as structural changes, on the SQL Server side.

Prompting for Synchronization happens when

-         Performing ANY Microsoft Dynamics NAV client action.

For example, if a user opens the Microsoft Dynamics NAV Windows client, the Microsoft Dynamics NAV Server starts applying the relevant structural changes to SQL Server, and the Microsoft Dynamics NAV Windows client is not shown until all the changes are done on the SQL Server side.

OR

-         Running the Sync-NAVTenant Windows PowerShell cmdlet.

SCENARIO 2 (DEPRECATED):

  • Single-tenancy / Legacy mode
  • “Prevent data loss from table changes” = No (manually selected, not persistent)

IMPORTANT NOTICE:

Setting the “Prevent data loss from table changes” C/SIDE switch to “No” is intended only as a last resort in a pure multitenancy scenario and in test or staging environments where partners do not have any business data database mounted against the application database. Any other usage that deviates from this might lead to unpredictable results and even undesired data loss in upgrades or, even worse, in production environments.

Never, for any reason, change this parameter to “No” when developing against a single-tenant / Legacy mode database.

  1. At object import/compile: C/SIDE checks the Object Metadata version in working memory and compares it to the version in Object Metadata table to decide if and what kind of change is made. (Same as in Microsoft Dynamics NAV 2009 and Microsoft Dynamics NAV 2013)
  2. C/SIDE DOES NOT CHECK FOR ANY BREAKING SCHEMA CHANGES IN SQL SERVER but simply FORCES COMMIT of metadata from C/SIDE cache TO the Object Metadata table.
  3. When prompting for SYNCHRONIZATION, Microsoft Dynamics NAV Server then compares Object Metadata table with Object Metadata Snapshot table content. Any difference in the value for the “Hash” field is a flag to Microsoft Dynamics NAV Server that a change exists and should be subsequently applied physically SQL Server side as structural changes.

Since no validation is made against SQL Server (“Prevent data loss from table changes” was set to “No”), this might result in:

  • Data loss
    There are a few specific cases where data is dropped in this scenario:
    • The primary key is detected as being no longer unique
    • Data per Company is changed from Yes to No and more than one company contains data
    • One or more fields are deleted
    • The data type of one or more fields is changed
  • Missing synchronization
    Activities cannot be completed because SQL Server prevents actions that would break the data structure, and therefore no Microsoft Dynamics NAV Windows client or Web client can connect to the database. The partner or customer has to resolve these missing synchronization issues before moving forward, or fall back to a backup where these issues no longer exist.

SCENARIO 3:

  • Multitenancy
  • “Prevent data loss from table changes” = Yes (default):

Same as Scenario 1 for point 1. and point 2.

When prompting for SYNCHRONIZATION, changes will be triggered and applied to the SQL Server data structure.

Prompting for synchronization in a pure multitenant deployment happens when

-         Performing ANY Microsoft Dynamics NAV client action

OR

-         Running the Sync-NAVTenant Windows PowerShell cmdlet

OR

-         Mounting a tenant database

 

Based on the scenarios depicted above, there are risks of data loss and/or missing synchronization issues if C/SIDE development (namely, work on table objects) is handled in a way that deviates from the intended paradigm.

Data loss issues:

These might arise typically in one of the following scenarios:

  • Direct removal of rows from the Object Metadata table in SQL Server
  • Stretched / borderline scenarios that implement platform files with a build no. lower than 36281 (KB 2934571), as described in this blog post.

 

Synchronization issues:

These might arise typically in one of the following scenarios:

  • The Microsoft Dynamics NAV Server service account has insufficient permissions
    The service account must be added to the "db_owner" SQL Server role for the Microsoft Dynamics NAV tenant database.
  • Stretched / borderline scenarios that implement platform files with a build no. lower than 36281 (KB 2934571), as described in this blog post.
    With a lower build number, you might get into one of the following scenarios:
    • When several developers commit changes at the same time in the same database / tenant while synchronization is running, this might lead to metadata corruption. (The Object Metadata table is now locked while changes are being committed.)
    • Actions like FOB Import > Replace > Save As, and then importing the saved FOB again, were causing metadata corruption.
  • SQL connection timeout while performing an operation, such as when SQL Server schema changes require dropping and rebuilding indexes on large tables.
    To resolve this issue, increase the following parameter in the Microsoft Dynamics NAV Server CustomSettings.config file:
     <add key="SqlCommandTimeout" value="10:00:00" />

Development Environment best practice

Thinking about potential data loss and synchronization issues is a big new challenge in the development environment, so some consideration and the following best practices are advisable. These apply to developing solutions for both single- and multitenant deployments.

  1. Do not use a build no. lower than 36310 (KB 2934572).
    As a partner, take this as the "RTM build no." starting point for NAV 2013 R2, deploy this platform hotfix in future projects, and also convert existing installations.
    NOTE: As per common best practice, we recommend that you download, request, test, and deploy the latest platform hotfix for Microsoft Dynamics NAV 2013 R2. It will contain corrections for minor issues not directly, or only slightly, related to synchronization scenarios.
  2. Never, ever change "Prevent data loss from table changes" to "No".
    This has been identified as one of the major sources of potential data loss and missing synchronization for NAV 2013 R2 databases.
  3. Make sure that the Microsoft Dynamics NAV Server service account has been granted the "db_owner" role in SQL Server.
  4. Increase the SqlCommandTimeout parameter in the Microsoft Dynamics NAV Server configuration file that you use in development to a very high value (such as 10:00:00).
  5. For large Microsoft Dynamics NAV objects OR a high number of table modifications, do NOT use a Microsoft Dynamics NAV client action to trigger synchronization; it is warmly preferable to use the Sync-NAVTenant Windows PowerShell cmdlet. (This is a typical scenario in upgrades.)
  6. For big batches of FOB files that make a high number of table modifications, be sure to test this in a safe staging environment and, where possible, import the table objects in smaller chunks, synchronizing after importing each chunk of Microsoft Dynamics NAV objects.
  7. For important changes to several table structures, such as when upgrading from a previous version, it is a good idea to run a SQL Server Profiler trace after triggering synchronization, to check what is running on the SQL Server side and to keep the synchronization monitored until it ends.

Recommended Events:

  • SP:StmtCompleted
  • SQL:StmtCompleted

Recommended Column Filters:

  • DatabaseName   Like <DatabaseName>
  • TextData       Not Like  SELECT %

Bottom line: it is worth mentioning that if a Microsoft Dynamics NAV client hang or disconnect happens due to a missing synchronization issue, or while a synchronization transaction is running in the background, the transaction rollback on the SQL Server side will take longer than the same committed transaction would, depending on the type of changes, the resources available, etc.

If you do end up in this situation, it is warmly advisable not to stop or restart the Microsoft Dynamics NAV Server, and to check through a SQL Server Profiler trace and/or SQL Server Management Studio whether the transaction has successfully rolled back.

Another blog post will follow this one, related to synchronization challenges and best practice while upgrading to Microsoft Dynamics NAV 2013 R2 from previous versions.

 

These postings are provided "AS IS" with no warranties and confer no rights. You assume all risk for your use.

 

Gerard Conroy - Microsoft Dynamics UK

Abdelrahman Erlebach - Microsoft Dynamics Germany

Duilio Tacconi - Microsoft Dynamics Italy

Jasminka Thunes - Microsoft Dynamics Norway                   

Microsoft Customer Service and Support (CSS) EMEA

 

A special thanks to Jorge Alberto Torres & Jesper Falkebo from the Microsoft Dynamics NAV development team

NAV Design Pattern of the Week - Temporary Dataset Report


After a short delay, here is the latest design pattern, brought to you by the Microsoft Dynamics NAV Design Patterns team.

Meet the Pattern

This pattern generates the data to be displayed dynamically by combining/processing several data sources. It then displays the resulting dataset without writing to the database.

 

Know the Pattern

 

While writing reports in Microsoft Dynamics NAV, we have the luxury of using a built-in iterator. So, once we define the dataitem and the ordering, the runtime takes care of the iteration.

 

The iterator has one shortcoming: It can only run through records written into the database. There are situations, however, where we want to display temporary datasets created at runtime by processing data from different sources. That is where the Temporary Dataset Report pattern can be used.

 

Use the Pattern

 

 This pattern takes a two-step approach to displaying the data:

 

  • Parse the data sources to create a record buffer in a temporary record variable.
  • Iterate through a data item of the Integer table and display one record from the temporary recordset in each iteration.

 

Step 1: Combining data sources to create a dataset

 

In this step, we would process the existing data to create a temporary recordset. The three most common techniques to do this are discussed in the following paragraphs.

 

The first technique is mostly used when we want to build the report based on one or more source tables. A lot of processing is required, and we therefore want to store and display the information from a temporary recordset. With this technique, we create a data item for the source record and then iterate through this data item to create the temporary recordset. An advantage of this technique is that it allows the user to perform additional filtering on the data source tables, since they are added as additional data items and therefore get their own tabs on the request page by default.
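As an illustration of this first technique, here is a minimal sketch. The buffer table, its fields, and the data item names are made up for the example and are not taken from a specific base report; TempResultBuffer is a global record variable marked as Temporary, and NextEntryNo is a global integer:

Item - OnAfterGetRecord()
BEGIN
  // Do the heavy processing here and store only the result in the temporary buffer.
  NextEntryNo := NextEntryNo + 1;
  TempResultBuffer.INIT;
  TempResultBuffer."Entry No." := NextEntryNo;
  TempResultBuffer.Description := Description;
  TempResultBuffer.INSERT;
END;

Integer - OnPreDataItem()
BEGIN
  SETRANGE(Number,1,TempResultBuffer.COUNT);
END;

Integer - OnAfterGetRecord()
BEGIN
  // Position the buffer on the record that matches the current iteration.
  IF Number = 1 THEN
    TempResultBuffer.FINDSET
  ELSE
    TempResultBuffer.NEXT;
  // The columns shown for this data item are based on TempResultBuffer fields.
END;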

 

 

The second technique was made available with NAV 2013 when queries were introduced as a tool to help us combine data from different sources.

Read the entire pattern on NAV Wiki...

 

Best regards,

Abhishek Ghosh from the Microsoft Dynamics NAV Design Patterns team

Using C/AL Query Objects Instead of Nested Loops


After a bit of a delay, here is the latest Microsoft Dynamics NAV design pattern, brought to you by the NAV Design Patterns team.

Meet the Pattern

This pattern shows how the new query object type introduced in Microsoft Dynamics NAV 2013 allows you to replace costly loops when inspecting data from two or more tables.

Know the Pattern

One of the core operations in a relational database is joining two or more tables. For example, you might need to extract all sales lines in the database together with information regarding the related sales header. This requires joining the Sales Header and Sales Line tables using Sales Header No. as the connecting field.

The join operation has traditionally been done in C/AL by record looping. When Microsoft Dynamics NAV 2013 introduced the query object, it allowed us to produce a data set that is the result of a join operation between two or more tables. This simplifies the problem of finding related records in two tables linked through a foreign key.

Pattern Elements

1.       Two or more tables that contain records linked through a foreign key: Table 1, Table 2, Table n.

2.       A query object Query X, that joins Table 1, Table 2, etc. based on the connecting key.

3.       A processing codeunit that loops through the query records (or any other code-bearing object).

Pattern Steps

1.       Run the query on the connected tables.

2.       Loop through the records returned by the query.

3.       Process the records. 

The following  diagram illustrates the elements of the pattern.  

Use the Pattern

The Bank Acc. Reconciliation Line table (274) and the Bank Account Ledger Entry table (271) are connected through the Bank Account No. field. Identify the matching pairs of records based on having the same remaining amount and transaction date.

Solution Using Nested Loops

The classic C/AL approach is to:

1.       Set the necessary filters on the left table, i.e. table 274.

2.       Loop through the filtered records.

3.       For each record in the filter, find the related records in the right table (table 271) and set the required filters on it.

4.       For each pair of records from the left and right table, decide if they are a solution and if so, apply them to each other.

VAR

  BankAccRecLine@1005 : Record 274;

  BankAccLedgerEntry@1006 : Record 271;

  BankAccEntrySetReconNo@1007 : Codeunit 375;

 

BEGIN

  BankAccRecLine.SETFILTER(Difference,'<>%1',0);

  BankAccRecLine.SETRANGE(Type,BankAccRecLine.Type::"Bank Account Ledger Entry");

  IF BankAccRecLine.FINDSET THEN

    REPEAT

      BankAccLedgerEntry.SETRANGE("Bank Account No.",BankAccRecLine."Bank Account No.");

      BankAccLedgerEntry.SETRANGE(Open,TRUE);

      BankAccLedgerEntry.SETRANGE("Statement Status",BankAccLedgerEntry."Statement Status"::Open);

      BankAccLedgerEntry.SETFILTER("Remaining Amount",'<>%1',0);

      IF BankAccLedgerEntry.FINDSET THEN

        REPEAT

          IF (BankAccRecLine.Difference = BankAccLedgerEntry."Remaining Amount") AND
             (BankAccRecLine."Transaction Date" = BankAccLedgerEntry."Posting Date")
          THEN
            BankAccEntrySetReconNo.ApplyEntries(BankAccRecLine,BankAccLedgerEntry,Relation::"One-to-One");

        UNTIL BankAccLedgerEntry.NEXT = 0;

    UNTIL BankAccRecLine.NEXT = 0;

END;

Solution Using a Query

The new query-based approach involves:

1.       Define a query that returns the full filtered join of tables 271 and 274.

2.       Loop through the records returned by the query.

3.       For each query record, decide if it represents a solution and then connect the two table records that formed it through an application.

VAR

  BankRecMatchCandidates@1001 : Query 1252;

  BankAccEntrySetReconNo@1007 : Codeunit 375;

BEGIN

BankRecMatchCandidates.SETRANGE(Rec_Line_Bank_Account_No,BankAccReconciliation."Bank Account No.");

BankRecMatchCandidates.SETRANGE(Rec_Line_Statement_No,BankAccReconciliation."Statement No.");

 

IF NOT BankRecMatchCandidates.OPEN THEN

  EXIT;

 

WHILE ...
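The remainder of the loop is published on the wiki. As a rough sketch only (the column names and the ApplyCurrentQueryRow helper below are illustrative and are not the actual members of query 1252), the continuation typically reads each joined row, applies the matching criteria, and lets the existing codeunit apply the two underlying records to each other:

WHILE BankRecMatchCandidates.READ DO
  IF (BankRecMatchCandidates.Difference = BankRecMatchCandidates.Remaining_Amount) AND
     (BankRecMatchCandidates.Transaction_Date = BankRecMatchCandidates.Posting_Date)
  THEN
    // Fetch the underlying Bank Acc. Reconciliation Line and Bank Account Ledger Entry
    // records behind the current query row, then call
    // BankAccEntrySetReconNo.ApplyEntries(BankAccRecLine,BankAccLedgerEntry,Relation::"One-to-One")
    // exactly as in the nested-loop solution.
    ApplyCurrentQueryRow(BankRecMatchCandidates);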

 

Read more on NAV Design Patterns Wiki...

Best regards,

Bogdan Sturzoiu, at Microsoft Development Center Copenhagen

Journal Error Processing


Today’s pattern changes the paradigm of how error processing has been done in Microsoft Dynamics NAV until now, making it less intrusive and more productive for the user.

Meet the Pattern

This pattern describes an optimized way to handle invalid, incomplete, or inconsistent data that users enter in journals.

Know the Pattern

Scenario: A user has entered data on a journal line and proceeds to invoke a processing action on it, such as posting or exporting to electronic payments. Microsoft Dynamics NAV validates the data before it is committed. If any validation errors are found, the user must be informed of validation errors in the most optimal way.

One design is to stop execution when an error is found and prompt the user to correct it. After correcting the error, the user restarts processing and is stopped again at the next error, and so on. Stopping and showing each error one at a time is time-consuming and frustrating for the user.

Another design is that processing does not stop when an error is found. Instead, all errors are gathered in a table and displayed all at once at the end of processing. This way, the processing is ideally invoked only once, reducing the time and effort spent by the user to expose and correct all data validation errors.

In both designs, the processing is not finalized if any errors are found (for example, exporting to electronic payments is not done, until the data error is resolved).

This document describes how to implement the second error-handling design: Showing all errors at the end.

Use the Pattern

The example below comes from the implementation of SEPA Credit Transfer.

After setting up SEPA-specific configurations, the user can start entering vendor payments that will later be exported to the payment file. (The setup depends on the country, but generally involves choosing number series for SEPA export files, choosing the export format, and enabling SEPA Credit Transfer.)

In the W1 solution (and most of the country-/region-specific versions), payment lines are created in the Payment Journal page, from where the user can invoke the Export Payments to File action, which will attempt to create a SEPA-compliant XML file containing the description of the journal payments that are to be made by the bank.

When the Export Payments to File function is invoked, Microsoft Dynamics NAV validates the journal line data. If the data must be completed or updated, then no file will be created and the user sees the following message:

To give a visual overview, the lines that need corrections are highlighted in red. The FactBox is context-sensitive, meaning that it shows only the errors that relate to the currently selected line.

When the first payment journal line is selected, the FactBox shows errors for the first line.


When the second payment journal line is selected, the FactBox shows errors for the second line. 


Application Objects

The following list shows, for each generic object that you can use as a base for your implementation, a description and the sample W1 implementation of SEPA Credit Transfer*.

Generic Object: Journal Page
Description: This is the journal list page where the user invokes the processing action.
Sample W1 implementation: Payment Journal

Generic Object: Action on Page
Description: The processing action invoked by the user on the journal list page.
Sample W1 implementation: Export Payments to File

Generic Object: Errors Page List Part
Description: A FactBox that displays any journal line validation errors. To improve user experience, the developer can highlight the lines with errors in red and conveniently sort the lines with errors at the top.
Sample W1 implementation: Payment Journal Errors Part

Generic Object: Validation codeunit
Description: Contains code that checks that the journal line contains correct, complete, and coherent data and that the line is ready for whatever process must be done next.
Sample W1 implementation: SEPA CT-Check Line

Generic Object: Processing codeunit
Description: Executes the processing of the journal lines.
Sample W1 implementation: SEPA CT-Export File

Generic Object: Journal Error Text Table
Description: Contains:

  • The error messages
  • Link information about where the error messages belong. For example, in table 1228, Payment Jnl. Export Error Text, the error is linked uniquely to a journal line by the following fields:
    • Journal Template Name, with TableRelation="Gen. Journal Template"
    • Journal Batch Name, with TableRelation="Gen. Journal Batch".Name WHERE (Journal Template Name=FIELD(Journal Template Name))
    • Journal Line No.

Other related information can be added, such as the document number of the original source document, if the current journal line originates from a document.

An extra improvement would be to add a drilldown or a link to the page where the user can fix the error. This would significantly simplify the scenario by excluding manual navigation and investigation by the user to find the page where the error can be fixed.

Sample W1 implementation: Payment Jnl. Export Error Text

 

 * The W1 implementation of file export for SEPA Credit Transfer contains the generic SEPA functionality. However, due to differences in data models and user scenarios in various country implementations, the selected local versions contain adaptations of the generic functionality.

Flow

Find below a diagram describing the flow between the objects involved in the journal error processing.

Code

Following the flow above, the code (in the SEPA Credit Transfer example) is as follows.
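The full listing is available on the wiki. As a heavily simplified sketch (this is not the shipped SEPA CT-Check Line code; the checks, the text constants, and the Error Text field name are illustrative), the validation codeunit inserts a record into the error text table for each problem it finds instead of raising an error:

CheckGenJnlLine(GenJnlLine : Record "Gen. Journal Line")
BEGIN
  IF GenJnlLine."Account No." = '' THEN
    AddError(GenJnlLine,AccountMissingErr);
  IF GenJnlLine.Amount <= 0 THEN
    AddError(GenJnlLine,AmountNotPositiveErr);
END;

LOCAL AddError(GenJnlLine : Record "Gen. Journal Line";ErrorText : Text)
VAR
  PaymentJnlExportErrorText : Record "Payment Jnl. Export Error Text";
BEGIN
  PaymentJnlExportErrorText.INIT;
  PaymentJnlExportErrorText."Journal Template Name" := GenJnlLine."Journal Template Name";
  PaymentJnlExportErrorText."Journal Batch Name" := GenJnlLine."Journal Batch Name";
  PaymentJnlExportErrorText."Journal Line No." := GenJnlLine."Line No.";
  // A real implementation would also assign a sequential line number inside the error table
  // so that one journal line can carry several error messages.
  PaymentJnlExportErrorText."Error Text" :=
    COPYSTR(ErrorText,1,MAXSTRLEN(PaymentJnlExportErrorText."Error Text"));
  PaymentJnlExportErrorText.INSERT;
END;

The processing codeunit then checks whether any error text records exist for the journal lines in the batch and only creates the export file if none are found.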

Read more on NAV Wiki...

Best regards,

The NAV Application Patterns team

NAV Application Design Pattern - Feature Localization for Data Structures


Meet the Pattern

This pattern shows a solution for integrating W1 features to pre-existing country features that use different tables to achieve similar functionality.

Know the Pattern

It sometimes happens that certain features are requested in a country/region that is supported by Microsoft, but they are not initially considered generic enough to be included in the W1 build. This is how local features were created, such as Subcontracting in Italy and India, or specific banking and payments functionality in Italy, France, Spain, and others.

Then, at some point in time, a decision is made to create a W1 feature that is closely related to the local functionality but uses a completely different set of tables, pages, and so on. The developers now face the following problem: how to enable the newly developed W1 feature in a country, such that the customers who are accustomed to their local structures can seamlessly continue working without completely (or immediately) switching to the W1 objects.

This was the issue that was tackled in Microsoft Dynamics NAV 2013 R2, in relation to the SEPA Credit Transfer functionality.

Using a Proxy

The generic Proxy pattern is "a class functioning as an interface to something else" (Wikipedia).

 

Figure 1. Proxy in UML

Pattern Elements

The Microsoft Dynamics NAV data model translation of the proxy pattern can be used as explained below.

In the diagram, RealSubject is the Microsoft Dynamics NAV data model. Variations in table structures, relationships, and numbers are particular to each country. The W1 model is the base for the localized data models. However, some countries have heavy localizations which cannot be directly processed by the W1 core objects.

The proxy is a codeunit that gathers data from wherever it is stored and transforms it to fit into a standard table, which is later used across all localizations.

The interface is the fixed form in which the data is presented to be consumed by the client.

The client can be an XML port that is fed from the common data interface. It can also be any other data processor (a codeunit fed to another table, etc.) or data display object (page or report).

Pattern Steps

  1. The user creates records in the local tables.
  2. The user invokes an action that must be processed using the W1 feature code.
  3. The proxy codeunit moves the data from the local tables to the W1 tables, either into a temporary or persistent set of records, as needed (see the sketch after this list).
  4. The W1 code now performs the action on the W1 table data.    
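A minimal sketch of step 3 (the local table name and the field mapping are purely illustrative) could look like this; the TempGenJnlLine parameter is marked as Temporary in the locals:

MoveLocalPaymentsToW1Buffer(VAR TempGenJnlLine : Record "Gen. Journal Line";VAR LocalPaymentLine : Record "Local Payment Line")
BEGIN
  IF LocalPaymentLine.FINDSET THEN
    REPEAT
      TempGenJnlLine.INIT;
      TempGenJnlLine."Line No." := LocalPaymentLine."Line No.";
      TempGenJnlLine."Account Type" := TempGenJnlLine."Account Type"::Vendor;
      TempGenJnlLine."Account No." := LocalPaymentLine."Vendor No.";
      TempGenJnlLine.Amount := LocalPaymentLine.Amount;
      TempGenJnlLine.INSERT;
    UNTIL LocalPaymentLine.NEXT = 0;
END;

The W1 code (for example the SEPA export) can then run unchanged against the records in TempGenJnlLine.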

Use the Pattern

In Microsoft Dynamics NAV 2013 R2, Continue reading on NAV Patterns Wiki...

Best regards,

Bogdan Sturzoiu at Microsoft Development Center Copenhagen 

NAV Design Pattern - Journal Template-Batch-Line


This week, the pattern is familiar to most C/AL developers, but if you are new to Microsoft Dynamics NAV, or if you need a refresher, here is the pattern behind journal templates, batches, and lines.

Meet the Pattern

The role of a journal line is to temporarily hold transaction data until the transaction is posted. Before posting, the entries are in a draft state, which means that they are available for corrections and/or deletion. As soon as the entries are posted, they are converted to ledger entries.

Journal templates are used to specify the underlying journal structure and to provide the default information for the journal batches. Journal batches usually serve to group journal lines, such as lines created by two different users.

Know the Pattern

Journal templates and journal batches are used if there is a need to create and post one or more entries. They are implemented in multiple areas of the application, like Sales, Purchases, Cash Receipts, Payments, Fixed Assets.

Journal Templates

The journal templates are located on the Journal Template page. A Journal Template definition contains a series of attributes, such as:

  • Name
  • Description
  • Type
  • Recurring
  • No. Series

The Journal Template table stores the relevant attributes that define the nature and behavior of the journal templates, for example:

Test Report ID: The journals offer the possibility of running test reports. The role of a test report is to simulate the posting process: the verification criteria for the journal lines are run, and the report can be displayed, all without doing the actual posting. This helps in finding and correcting any errors that might exist in the data. The name of the test report is the same as the name of the corresponding journal, plus the suffix " - Test". For example, the General Journal has the associated test report named General Journal - Test.

Posting Report ID: This report is printed when a user selects Post and Print.

Page ID: For some journals, more UI objects are required. For example, the General Journals have a special page for bank and cash.

Source Code: Here you can enter a Trail Code for all the postings done through this journal.

Recurring: Whenever you post lines from a recurring journal, new lines are automatically created with a posting date defined in the recurring date formula.

Each journal template defines a default value of those attributes. The values that are defined in a template will be inherited by the journal batches, which will be created from a journal template.

 

Microsoft Dynamics NAV is released with a number of standard journal templates predefined in the Journal Templates page. More templates can be defined by the users.

Journal Batches

Journal batches are created with the help of the journal templates.

A journal batch is typically used to make a distinction between collections of logically grouped journal lines. A typical design is to have a journal batch for each user who enters lines. The batches are used during the posting process, in order to post one or multiple lines at once.

Journal Lines

Journal lines contain the actual business data (posting dates, account numbers, amounts) that will be posted as ledger entries.

During posting, only the information from the journal lines is needed. However, the information has been created with the help of the journal templates and grouped together using the journal batches.

Posting creates ledger entries from the temporary content that is stored in the journal lines. Ledger entries are not created directly. Instead, they are posted from journal lines.

 

Aggregation

There is a 1:n aggregation relationship between journal templates and journal batches, as well as between journal batches and journal lines. Deleting a template cascades to the deletion of the related batches and lines, and deleting a batch cascades to the deletion of the related lines.
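For example, a simplified sketch of how this cascade is typically enforced in the OnDelete triggers (the actual base application triggers differ in details and also clean up other related data); JnlLine and JnlBatch are record variables for the corresponding journal line and journal batch tables:

// Journal Template table - OnDelete()
JnlLine.SETRANGE("Journal Template Name",Name);
JnlLine.DELETEALL(TRUE);
JnlBatch.SETRANGE("Journal Template Name",Name);
JnlBatch.DELETEALL(TRUE);

// Journal Batch table - OnDelete()
JnlLine.SETRANGE("Journal Template Name","Journal Template Name");
JnlLine.SETRANGE("Journal Batch Name",Name);
JnlLine.DELETEALL(TRUE);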

Read more on NAV Patterns Wiki...

Best regards,

Bogdana Botez at Microsoft Development Center Copenhagen


NAV Design Pattern - Instructions in the UI


Meet the Pattern

To mitigate usability problems with learnability or discoverability of Microsoft Dynamics NAV functionality, it is possible to embed instructions in the user interface (UI) in connection with the task that the user is performing. The goal is to explain how to use the product or feature without impairing the user’s productivity after the user has learned how to use a feature.

Know the Pattern

Users must often go through a few days of training to learn how to use Microsoft Dynamics NAV, and even then, many users rely on super users to help them mitigate difficulties using Microsoft Dynamics NAV. In addition, because of low discoverability and learnability, many useful features are not being used at all.

Users’ expectations are changing. They expect the software to be usable out-of-the-box because this is the trend in software generally.

One of the cheapest and most effective methods to solve usability issues is to embed instructional messages in the product. From a user-experience point of view, this should be used as a last resort. UI should be self-explanatory, efficient, and simple to use. Accordingly, you should only implement this pattern if simplifying and improving a scenario is not possible or is too expensive.

In this connection, the most important requirement is not to impair productivity of the users. One of the biggest and most common UX mistakes that developers make is to “optimize for new users”. After the user has learned how to use the product, all the instruction texts and dialogs that we added to the UI will clutter the page and make information less visible. Instructional dialogs on routine tasks will become annoying. Therefore, we must make all instructions dismissible.

In the Mini App solution we have used the following elements:

  1. Dismissible dialogs
  2. FastTabs with instructional text
  3. Help tiles on a Role Center
  4. Tooltips on actions and fields
  5. Task-oriented page Help

Use the Pattern

The following pattern applies to dismissible parts in the UI.

We have a table that stores the instruction code ID and the user ID, so that we can track which user has turned off which instruction. All the logic is handled from a codeunit; it is the responsibility of the codeunit to show or hide dialogs as needed.
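A minimal sketch of that codeunit logic (the table name, field names, and function names here are illustrative, not the shipped application objects) could be:

IsEnabled(InstructionCode : Code[50]) : Boolean
VAR
  DismissedInstruction : Record "Dismissed Instruction"; // hypothetical table, primary key: User ID, Instruction Code
BEGIN
  EXIT(NOT DismissedInstruction.GET(USERID,InstructionCode));
END;

Disable(InstructionCode : Code[50])
VAR
  DismissedInstruction : Record "Dismissed Instruction";
BEGIN
  DismissedInstruction."User ID" := USERID;
  DismissedInstruction."Instruction Code" := InstructionCode;
  IF DismissedInstruction.INSERT THEN; // ignore if the instruction was already dismissed
END;

A page or dialog calls IsEnabled before showing its instructional text, and calls Disable when the user selects a "Don't show this again" option.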

Dismissible Dialogs

Dismissible dialogs show the instructional message about the functionality, with the user option to... read more on the NAV Design Patterns wiki.

Best regards,

Nikola Kukrika at Microsoft Development Center Copenhagen

Synchronize metadata, please…


One of the new processes we have in NAV 2013 R2 is “metadata synchronization”. It is the process by which the object (table) description created in C/SIDE by the NAV developer is applied to the SQL object, so that the object structure in SQL Server becomes the same as what we have in the NAV Object Designer.

It is described at http://blogs.msdn.com/b/nav/archive/2014/03/27/table-synchronization-paradigm-in-microsoft-dynamics-nav-2013-r2.aspx

Unfortunately, the synchronization step is not mentioned in some places, for example:
- after you convert a database to NAV 2013 R2, just after you have opened the database with the NAV 2013 R2 C/SIDE client and received the message “database conversion was successful”, run metadata synchronization;
- after you have created a new database, run metadata synchronization;
- in any case, whatever you have done with objects, please run synchronization…

You can read about the synchronization process in detail at https://mbs.microsoft.com/files/partner/NAV/Support/HotTopics/SynchronizingSchemachangesNAV2013R2.docx

But a rough description could be: in NAV we have three “parts” of the same object: the SQL object, the object description in the metadata snapshot, and the object description in object metadata.
When synchronization runs, NAV compares the object description in the metadata snapshot with the object metadata. If differences are found, NAV tries to apply the object description from object metadata to the SQL object, to make it look the way we see it in Object Designer in C/SIDE. When this is done, NAV updates the object description in the metadata snapshot, and all three parts are identical. So there should be no situation where the SQL object is not the same as the object description in the metadata snapshot.
To find these inconsistencies, we have released the “NAV 2013 R2 database consistency checker tool”, which checks database metadata against the database structure and reports any inconsistency, which can then be fixed directly in SQL. The tool is released under KB 2963997 and can be downloaded from the hotfix site.

The metadata synchronization process is run by the NAV Service Tier (NST), and it starts when:
- any client (RTC, Web client, web service) connects to the NST;
- the NAV Administration PowerShell cmdlet Sync-NAVTenant is executed;
- a user imports objects in Object Designer and the option “Prevent data loss from table changes” is set to “Yes”.

Usually, synchronization is a fast process: we run the RTC, it connects to the NST, synchronization starts and finishes, and the RTC loads.

However, when we make “big changes” (for example, added fields to a table and a few keys…) or have big databases, synchronization can run for hours. I have seen cases where synchronization runs for more than 3 hours, and here comes the problem: the process runs in the background, users are not aware of it, and whatever they try to do with the database, they receive various errors (about channel failure, SQL timeout, service not responsive, and so on). Even worse, a user can stop the NST, which kills the synchronization, and then SQL Server starts a rollback for the next few hours…

When NAV shows that synchronization finished successfully (the cmdlet finished) or the NAV client shows an error, it could be that SQL Server is still synchronizing metadata in the background (or rolling back). Run sp_who2 in SQL Server Management Studio to see if there are running/active processes where ProgramName is “Microsoft Dynamics NAV Service”. If there is a process that is not “sleeping” and its DiskIO increases continuously, please wait and do nothing with NAV (don’t modify any object, don’t compile, don’t import; better to close the development environment altogether). At some point the process becomes “sleeping”, which means synchronization has finished (whether it was successful or failed you can see in Windows Event Viewer). Only after that should you continue with your further actions.

Our development team is preparing improvements that will make metadata synchronization more transparent and user friendly, so we expect an easier life soon. But for now: synchronize metadata and track the synchronization, please…

  

These postings are provided "AS IS" with no warranties and confer no rights. You assume all risk for your use.

Gedas Busniauskas
Microsoft Lithuania
Microsoft Customer Service and Support (CSS) EMEA

Debug and Bulk Insert in Microsoft Dynamics NAV 2013 R2


We recently received some cases related to an unexpected behavior with the Microsoft Dynamics NAV Debugger when the Bulk Insert feature is enabled.

The Bulk Insert functionality is duly described in the MSDN Library at http://msdn.microsoft.com/en-us/library/dd355341(v=nav.70).aspx. Here you can see that Microsoft Dynamics NAV automatically buffers inserts in order to send them to Microsoft SQL Server at one time. In this way, performance is improved since the number of server calls is reduced.

So far, so good.

But when you enable debugging in a scenario that results in a violation of a primary key in SQL Server due to several buffered INSERT calls, you might find that the Microsoft Dynamics NAV Debugger does not stop at the exact INSERT statement but, due to the delayed Bulk Insert, at a position right after the INSERT C/AL statement sequence.

For example, create a codeunit with the following code:

// GLEntry is a local variable of type Record 17 "G/L Entry".
IF NOT CONFIRM('Please enable debugging, and then click OK') THEN
  ERROR('Action canceled');

GLEntry.INIT;
GLEntry."Entry No." := 111;
MESSAGE('Insert GL Entry record');
GLEntry.INSERT;

MESSAGE('Try to insert duplicate GL Entry record');
GLEntry.INSERT; // same primary key as the first INSERT

MESSAGE('Do something. Such as FINDLAST');
GLEntry.FINDLAST;

ERROR('End of scenario');

Run the codeunit, and then, when prompted, enable debugging against the current session.

Normally, you would expect the debugger to stop exactly at the second INSERT statement due to the violation of the primary key (duplicate), but in this scenario the debugger stops at the FINDLAST statement. This behavior is due to the Bulk Insert feature, which delays the INSERT, so the error is only caught after the last INSERT statement.

Workaround

Microsoft Dynamics NAV 2013 R2 introduced a configuration parameter in the CustomSettings.config file for the Microsoft Dynamics NAV Server service:

  <!--

    Specifies whether to enable the SQL Buffered Insert functionality to buffer rows that are being inserted into a database table.

    When this parameter is enabled, up to 5 rows will be buffered in the table queue before they are inserted into the table. 

    To optimize performance in a production environment, you should set this parameter to TRUE (enabled). In a test environment,

    you can set this parameter to FALSE (disabled) to debug SQL insert failures.

  -->

  <add key="BufferedInsertEnabled" value="True" />

The comment gives a brief explanation of how to turn this feature on and off. Simply change this parameter from True to False (in the Microsoft Dynamics NAV Server Administration tool, on the General tab, clear the Enable Buffered Insert field), restart the service, and then run the same codeunit as described above. Now the debugger stops exactly at the primary key violation on the duplicate INSERT attempt.

The aforementioned behavior also reproduces with Microsoft Dynamics NAV 2009 R2 and Microsoft Dynamics NAV 2013. With these versions, you cannot turn the Bulk Insert feature on and off, but you can apply a workaround if you would like to fall back to the classic INSERT instead of the automatic enablement of the Bulk Insert feature.

The workaround is pretty simple: temporarily violate one of the constraints for Bulk Insert so that it falls back to a classic INSERT into SQL Server. The easiest and most feasible option is to simply add a BLOB field to the table structure (just add the field; you do not need to populate it). Save and compile the table, and you will no longer have the Bulk Insert effect when you need to debug. Of course, we do recommend performing this action in a staging or test environment.

 

These postings are provided "AS IS" with no warranties and confer no rights. You assume all risk for your use.

 

Duilio Tacconi                                      Microsoft Dynamics Italy         

Microsoft Customer Service and Support (CSS) EMEA

Special thanks to Nacho Galajares, Antonio Cerrolaza from tipsa.net and Tobias Fenster, Stefan Konrad from infoma.de 

Windows Regional Settings and Number Formatting in RDLC Reports


You probably know this blog post from Robert Miller: http://blogs.msdn.com/b/nav/archive/2014/03/25/formatted-decimal-values-dropping-symbols-in-rdlc.aspx. It talks about changing/forcing number formats when Windows is not able to do so. For Switzerland (de-CH), number formats in Windows are not correct regarding the thousands separator. It should be formatted “1’234,56” but is formatted like “1 234,56” (a space instead of an apostrophe). 

Current status

Due to the nature of RDL and the option to specify a language name for each and every single TextBox control, Report Viewer does not pick up specific user changes for decimal or date formats, even if the requested regional setting is equal to the current user's regional setting. Report Viewer retrieves the default culture settings for a language name, but does not take into account that this setting (when selected as the current/default user format) may have a changed number format.

 

 

So Report Viewer never knows about the current regional settings and the changed symbols when it looks up and uses the default settings. For our friends from Switzerland, there is currently only one known option: manually change the thousands separator and re-format the RDLC decimal values. The drawback is that every decimal expression has to be changed in all affected reports.

Talking about the world and everything

Due to a talk between buildings K and L in Munich a while ago, we came across localization topics, globalization, and ways to force specific locales (and number symbols) for a Windows Forms application. Unfortunately, there is no supported way, because globalization changes are only supported in web.config, not app.config (see http://msdn.microsoft.com/en-us/library/vstudio/bz9tc508(v=vs.100).aspx). Not to mention the different threads and (to my knowledge) the lack of inheritance of culture information to the ReportViewer thread.

But this discussion caused me to start searching the web and I came across custom locales (http://msdn.microsoft.com/en-us/library/windows/desktop/dd317785(v=vs.85).aspx). There is also a tool to create custom locales named Locale Builder (Download at http://www.microsoft.com/en-us/download/details.aspx?id=41158). 

The end is near

With the Locale Builder tool, you can create custom locales with changed number symbols, build an MSI file from them, and then *REPLACE* an existing locale in Windows by assigning the same name to the new locale:

 

The new locale is installed using the MSI file, replaces the default locale (in this case for de-CH), and is marked with an asterisk as a custom locale: 

Now, by default, the correct grouping symbol is used and shown when selecting de-CH, even without changing it manually: 

With this in mind, what do you think is the result when a Report is executed on this system without any change? See fields VK-Preis (Sales Price) or Betrag (Amount). 

 

BANG – Great!

 

Serving some white wine with the fish

However, when printing reports for foreign countries, it always uses the new default format now. Better than before, but not like a world champion would do…

So, in Object Designer, open table 8 Language in design mode and add the following function: 

GetLanguageName(LanguageCode : Code[10]) : Text[10]
CultureInfo := CultureInfo.GetCultureInfo(GetLanguageID(LanguageCode));

EXIT(CultureInfo.Name);

Local variable:

Name: CultureInfo
DataType: DotNet
Subtype: System.Globalization.CultureInfo.'mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'

Save table 8.

Open report 206 in design mode and add a new column to the DataItem Sales Invoice Header (this requires a global variable named Language of type Record 8 Language):

Language.GetLanguageName("Language Code")             Name: "LanguageName"

Now open the report layout for report 206 and bulk-select all TextBox controls in each Tablix. When selected, change the property Language for all controls to =Fields!LanguageName.Value.

Save everything and run report 206 to show all posted invoices in the database. For all the different languages, you should see the correct decimal symbols.

Love it? 

 

Carsten Scholling

Microsoft Dynamics Germany
Microsoft Global Business Support (GBS) EMEA

Microsoft Connect: http://connect.microsoft.com
Online Support: http://www.microsoft.com/support

Extensibility for the Microsoft Dynamics NAV Tablet Client


Get in touch

With Microsoft Dynamics NAV 2015, you will be able to run your Microsoft Dynamics NAV application on tablets. The touch interface on these devices opens up a few cool new scenarios. One obvious use of touch is to allow users to write directly on the tablet, for example to sign documents.

In this blog post, I will walk you through how to develop a client control add-in with JavaScript that you will be able to add to any Microsoft Dynamics NAV page. This add-in shows a box in which the user can write with a tablet pen or just with his finger. It also demonstrates how to save the image into a Microsoft Dynamics NAV table as a BLOB.

If you are not familiar with JavaScript client add-ins or if you just need a refresher, take a look at this walkthrough for your classic ‘Hello World’ example.

I am referring to this add-in as the ‘Signature Add-in’ and to the graphical data as ‘the signature’, but it could really be any type of hand-drawn graphics.

So, let’s get started.

Creating the C# class library

In Visual Studio, create a new C# class library project and add a reference to the Microsoft.Dynamics.Framework.UI.Extensibility.dll assembly. You will find this assembly in a directory similar to C:\Program Files (x86)\Microsoft Dynamics NAV\80\RoleTailored Client.

If you are already familiar with Microsoft Dynamics NAV HTML/JavaScript add-ins, you know that the purpose of this class library is merely to specify the interface and make the C/AL compiler happy. It does not contain any actual executing code.

On the server side, besides the usual AddInReady event, we will need two more events: one to write the signature data (SaveSignature), and one to read the signature from the Microsoft Dynamics NAV table and trigger an update on the page (UpdateSignature).

On the client side, that is, in the JavaScript code, we need a method to actually draw the graphics, and we also want to be able to clear the content.

To specify this API, create a single public interface looking like this:

 

namespace SignatureAddIn

{

    using Microsoft.Dynamics.Framework.UI.Extensibility;

 

    /// <summary>

    /// Interface definition for the signature add-in.

    /// </summary>

    [ControlAddInExport("SignatureControl")]

    public interface ISignatureAddIn

    {

        [ApplicationVisible]

        event ApplicationEventHandler AddInReady;

 

        [ApplicationVisible]

        event ApplicationEventHandler UpdateSignature;

       

        [ApplicationVisible]

        event SaveSignatureEventHandler SaveSignature;

 

        [ApplicationVisible]

        void ClearSignature();

 

        [ApplicationVisible]

        void PutSignature(string signatureData);

    }

 

    public delegate void SaveSignatureEventHandler(string signatureData);

}

Notice that the SaveSignatureEventHandler delegate takes a string parameter, which will contain the actual serialized data representing the image.

Build your assembly to make sure you did not forget a semi-colon somewhere.

Next, you will need to sign your assembly, obtain its public key token and copy it to the client add-ins folder. To do that, follow the steps as described in the walkthrough.

 

Creating the manifest file

In the manifest of an add-in, which is just a regular XML file, we specify the resources that the control will use. The client-side code consists of one single JavaScript file, signature.js, and uses a single CSS file to style the HTML. We will also add a call to an initialization method in our script. The manifest is a good place to do that because the framework ensures that it gets called only when the browser is ready.

That makes our manifest look like this:

<?xml version="1.0" encoding="utf-8" ?>

<Manifest>

  <Resources>

    <Script>signature.js</Script>

    <StyleSheet>signature.css</StyleSheet>

  </Resources>

  <ScriptUrls>

  </ScriptUrls>

  <Script>

      <![CDATA[

          init();

      ]]>

  </Script>

 

  <RequestedHeight>200</RequestedHeight>

  <RequestedWidth>700</RequestedWidth>

  <VerticalStretch>false</VerticalStretch>

  <HorizontalStretch>false</HorizontalStretch>

</Manifest>

 

Creating the CSS file

No big deal here, just create a file named signature.css (the name needs to match the one in the manifest) with the following content:

 

.signatureArea {

    width: 300px;

}

 

.signatureCanvas {

    border: solid;

    border-width: 1px;

    border-color: #777777;  

    background-color: #fff;

    width: 100%;

}

 

.signatureButton {

  width: 100px;

  height: 40px;

  color: white;

  background-color: #666666;

  font-size: 12pt;

  outline: 0;

  border-color: white;

}

Feel free to play with the styles, this will only affect your add-in and will not affect the Microsoft Dynamics NAV pages whatsoever.

The interesting part

All of what has been described so far is boilerplate stuff, which you will have to do for any Microsoft Dynamics NAV HTML client add-in. We are now getting to the interesting piece, which is the JavaScript code.

Create a file named signature.js. Again here, the name has to match the one you declared in the manifest.

Let’s start with the implementation of the interface contract that we previously defined in the C# class library:

var signature;

 

function init() {

 

    signature = new ns.SignatureControl();

    signature.init();

    RaiseAddInReady();

}

 

 

// Event will be fired when the control add-in is ready for communication through its API.

function RaiseAddInReady() {

    Microsoft.Dynamics.NAV.InvokeExtensibilityMethod('AddInReady');

}

 

// Event raised when the update signature has been called.

function RaiseUpdateSignature() {

    Microsoft.Dynamics.NAV.InvokeExtensibilityMethod('UpdateSignature');

}

 

// Event raised when the save signature has been called.

function RaiseSaveSignature(signatureData) {

    Microsoft.Dynamics.NAV.InvokeExtensibilityMethod('SaveSignature', [signatureData]);

}

 

 

function PutSignature(signatureData) {

    signature.updateSignature(signatureData);

}

 

function ClearSignature() {

    signature.clearSignature();

}

 

As you can see the SignatureControl object in the ns namespace is doing all the work, so let’s take a closer look at it.

(function (ns) {

 

    ns.SignatureControl = function () {

        var canvas,

            ctx;

 

        function init() {

            createControlElements();

            wireButtonEvents();

            wireTouchEvents();

            ctx = canvas.getContext("2d");

        }

 

     …

Here we declare the SignatureControl class in the ns namespace and the init() method. The createControlElements() function creates the various HTML elements that the control is made of.

       function createControlElements() {

            var signatureArea = document.createElement("div"),

                canvasDiv = document.createElement("div"),

                buttonsContainer = document.createElement("div"),

                buttonClear = document.createElement("button"),

                buttonAccept = document.createElement("button"),

                buttonDraw = document.createElement("button");

 

            canvas = document.createElement("canvas"),

            canvas.id = "signatureCanvas";

            canvas.clientWidth = "100%";

            canvas.clientHeight = "100%";

            canvas.className = "signatureCanvas";

 

            buttonClear.id = "btnClear";

            buttonClear.textContent = "Clear";

            buttonClear.className = "signatureButton";

 

            buttonAccept.id = "btnAccept";

            buttonAccept.textContent = "Accept";

            buttonAccept.className = "signatureButton";

 

            buttonDraw.id = "btnDraw";

            buttonDraw.textContent = "Draw";

            buttonDraw.className = "signatureButton";

 

            canvasDiv.appendChild(canvas);

            buttonsContainer.appendChild(buttonDraw);

            buttonsContainer.appendChild(buttonAccept);

            buttonsContainer.appendChild(buttonClear);

 

            signatureArea.className = "signatureArea";

            signatureArea.appendChild(canvasDiv);

            signatureArea.appendChild(buttonsContainer);

 

            document.getElementById("controlAddIn").appendChild(signatureArea);

        }

Besides plain old divs and buttons, the canvas is where we will actually be able to draw. Canvas has been supported in most browsers for a while and you can read more about it here.

The control has three buttons. One to accept the signature, which will save it to the database, one to clear the field, and one to redraw the signature from the database, mostly for test purposes, as you would probably not need it in most real-life scenarios. Let’s wire these buttons so they do something useful:

function wireButtonEvents() {

    var btnClear = document.getElementById("btnClear"),

        btnAccept = document.getElementById("btnAccept"),

        btnDraw = document.getElementById("btnDraw");

 

    btnClear.addEventListener("click", function () {

        ctx.clearRect(0, 0, canvas.width, canvas.height);

    }, false);

 

    btnAccept.addEventListener("click", function () {

        var signatureImage = getSignatureImage();

        ctx.clearRect(0, 0, canvas.width, canvas.height);

        RaiseSaveSignature(signatureImage);

    }, false);

 

    btnDraw.addEventListener("click", function () {

        RaiseUpdateSignature();

    }, false);

}

Notice that we use the drawing context ctx, which we obtained during initialization, to clear the content of the canvas. We will see exactly what getSignatureImage() does to obtain the data in a second, but before that, let’s wire the touch events.

The touch events

In order to be able to draw, we want to react to touch events. In this example, we also hook up mouse events, which is convenient if you want to test your add-in on a non-touch device with an old-fashioned mouse.

function wireTouchEvents() {

    canvas.addEventListener("mousedown", pointerDown, false);

    canvas.addEventListener("touchstart", pointerDown, false);

    canvas.addEventListener("mouseup", pointerUp, false);

    canvas.addEventListener("touchend", pointerUp, false);

}

As you can see, touchstart is the equivalent of a mousedown, while a touchend is the counterpart of a mouseup.

Once we have detected a touchstart, the trick is to start listening to touchmove and draw in the canvas to the current position of the ‘touching’. Once we get a touchend, we will then stop the listening and the drawing:

function pointerDown(evt) {

    ctx.beginPath();

    ctx.moveTo(evt.offsetX, evt.offsetY);

    canvas.addEventListener("mousemove", paint, false);

    canvas.addEventListener("touchmove", paint, false);

}

 

function pointerUp(evt) {

    canvas.removeEventListener("mousemove", paint);

    canvas.removeEventListener("touchmove", paint);

    paint(evt);

}

 

function paint(evt) {

    ctx.lineTo(evt.offsetX, evt.offsetY);

    ctx.stroke();

}

Canvas image data

We want to be able to serialize and de-serialize the image data from the canvas, so we can send it back and forth to the server in a string. The HTML canvas has built-in functionality to do that through the canvas element and its context:

function updateSignature(signatureData) {

    var img = new Image();

    img.src = signatureData;

    ctx.clearRect(0, 0, canvas.width, canvas.height);

    ctx.drawImage(img, 0, 0);

}

 

function getSignatureImage() {

    return canvas.toDataURL();

}

 

function clearSignature() {

    ctx.clearRect(0, 0, canvas.width, canvas.height);

}

 

return {

    init: init,

    updateSignature : updateSignature,

    getSignatureImage: getSignatureImage,

    clearSignature: clearSignature

};

 

The toDataURL() method converts the image into a (rather long) URL-encoded string containing all the pixels. To convert it back, we only need to create an image, set its src property to this URL-encoded string, and pass this image to the drawImage method on the canvas context. This is pretty convenient, as it allows us to use a simple string rather than a more complex data structure such as an array.

We are now done with the JavaScript part and the entire file looks like this:

var signature;

 

function init() {

    signature = new ns.SignatureControl();

    signature.init();

    RaiseAddInReady();

}

 

// Event will be fired when the control add-in is ready for communication through its API.

function RaiseAddInReady() {

    Microsoft.Dynamics.NAV.InvokeExtensibilityMethod('AddInReady');

}

 

// Event raised when the update signature has been called.

function RaiseUpdateSignature() {

    Microsoft.Dynamics.NAV.InvokeExtensibilityMethod('UpdateSignature');

}

 

// Event raised when the save signature has been called.

function RaiseSaveSignature(signatureData) {

    Microsoft.Dynamics.NAV.InvokeExtensibilityMethod('SaveSignature', [signatureData]);

}

 

 

function PutSignature(signatureData) {

    signature.updateSignature(signatureData);

}

 

function ClearSignature() {

    signature.clearSignature();

}

 

(function (ns) {

 

    ns.SignatureControl = function () {

        var canvas,

            ctx;

 

        function init() {

            createControlElements();

            wireButtonEvents();

            wireTouchEvents();

            ctx = canvas.getContext("2d");

        }

 

        function createControlElements() {

            var signatureArea = document.createElement("div"),

                canvasDiv = document.createElement("div"),

                buttonsContainer = document.createElement("div"),

                buttonClear = document.createElement("button"),

                buttonAccept = document.createElement("button"),

                buttonDraw = document.createElement("button");

 

            canvas = document.createElement("canvas"),

            canvas.id = "signatureCanvas";

            canvas.clientWidth = "100%";

            canvas.clientHeight = "100%";

            canvas.className = "signatureCanvas";

 

            buttonClear.id = "btnClear";

            buttonClear.textContent = "Clear";

            buttonClear.className = "signatureButton";

 

            buttonAccept.id = "btnAccept";

            buttonAccept.textContent = "Accept";

            buttonAccept.className = "signatureButton";

 

            buttonDraw.id = "btnDraw";

            buttonDraw.textContent = "Draw";

            buttonDraw.className = "signatureButton";

 

            canvasDiv.appendChild(canvas);

            buttonsContainer.appendChild(buttonDraw);

            buttonsContainer.appendChild(buttonAccept);

            buttonsContainer.appendChild(buttonClear);

 

            signatureArea.className = "signatureArea";

            signatureArea.appendChild(canvasDiv);

            signatureArea.appendChild(buttonsContainer);

 

            document.getElementById("controlAddIn").appendChild(signatureArea);

        }

 

        function wireTouchEvents() {

            canvas.addEventListener("mousedown", pointerDown, false);

            canvas.addEventListener("touchstart", pointerDown, false);

            canvas.addEventListener("mouseup", pointerUp, false);

            canvas.addEventListener("touchend", pointerUp, false);

        }

 

 

        function pointerDown(evt) {

            ctx.beginPath();

            ctx.moveTo(evt.offsetX, evt.offsetY);

            canvas.addEventListener("mousemove", paint, false);

            canvas.addEventListener("touchmove", paint, false);

        }

 

        function pointerUp(evt) {

            canvas.removeEventListener("mousemove", paint);

            canvas.removeEventListener("touchmove", paint);

            paint(evt);

        }

 

        function paint(evt) {

            ctx.lineTo(evt.offsetX, evt.offsetY);

            ctx.stroke();

        }

 

        function wireButtonEvents() {

            var btnClear = document.getElementById("btnClear"),

                btnAccept = document.getElementById("btnAccept"),

                btnDraw = document.getElementById("btnDraw");

 

            btnClear.addEventListener("click", function () {

                ctx.clearRect(0, 0, canvas.width, canvas.height);

            }, false);

 

            btnAccept.addEventListener("click", function () {

                var signatureImage = getSignatureImage();

                ctx.clearRect(0, 0, canvas.width, canvas.height);

                RaiseSaveSignature(signatureImage);

            }, false);

 

            btnDraw.addEventListener("click", function () {

                RaiseUpdateSignature();

            }, false);

        }

 

        function updateSignature(signatureData) {

            var img = new Image();

            img.src = signatureData;

            ctx.clearRect(0, 0, canvas.width, canvas.height);

            ctx.drawImage(img, 0, 0);

        }

 

        function getSignatureImage() {

            return canvas.toDataURL();

        }

 

        function clearSignature() {

            ctx.clearRect(0, 0, canvas.width, canvas.height);

        }

 

        return {

            init: init,

            updateSignature : updateSignature,

            getSignatureImage: getSignatureImage,

            clearSignature: clearSignature

        };

    };

})(this.ns = this.ns || {});

Packaging your add-in

Now that we have all the parts of the component, we need to zip it together and import it in Microsoft Dynamics NAV. This is again as you would do for any other add-in.

Create a zip file with the following structure:

 

Put the manifest at the root, the JavaScript file in the script folder and the CSS file in the Stylesheet folder.
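For reference, the archive layout then looks roughly like this (the zip file name is arbitrary; the folder names follow the convention described above):

SignatureControl.zip
  Manifest.xml
  Script\signature.js
  StyleSheet\signature.css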

Open any of the Microsoft Dynamics NAV clients (Windows, Web or Tablet) and go to the Control Add-ins page. Create a new entry named SignatureControl and enter the public key token that you saved earlier. Import the zip file.

 

The C/SIDE side of things

Now that our add-in is sitting comfortably within the confines of the Microsoft Dynamics NAV database, we need to add it to a page. But before that, we want a place to save the signature image data. In this fabricated example, I will add the signature to the Sales Invoice card page from the Mini app (page 1304), which is based on the Sales Header table.

  1. In Object Designer, open the Sales Header table and add a BLOB field called ‘SignatureImage’.
  2. Add the control to the page by opening page 1304 and adding the control in a separate group.


By now you should be able to fire up this page and see what our control looks like. To do that, open the client of your choice in the Mini app, navigate to the Sales Invoices, and open the Sales Invoice card page.

You should see the signature control. Try to draw in it with the mouse, or with your finger if you are on a touch-enabled device.

Even the clear button works already and allows you to delete your doodles.

The last part that we are missing is to save the pixels to, and retrieve them from, the Microsoft Dynamics NAV database. To do that we need to write a bit of C/AL code.

The C/AL code

If you recall how we defined the add-in interface, we have three triggers to take care of: AddInReady, UpdateSignature, and SaveSignature.

Nothing surprising here. The really interesting methods are SaveSignature and GetDataUriFromImage.

This is where the conversion between the URL-encoded image string and a Microsoft Dynamics NAV BLOB occurs.

The most convenient way to do this is to use the power of .NET for regular expression matching and memory streams.

So, let’s create a SaveSignature method and add the following .NET type variables to the locals:

The URL encoded representation of the image contains some goo around the actual pixel information. With .NET regular expressions, we strip the header by matching it and preserving the rest.

What is left is a base 64 encoded string, which we can convert to a byte array using the .NET Convert utility class. We then pass it to a memory stream and save it to the Microsoft Dynamics NAV table as a BLOB.

Obtaining the encoded URI is obviously the reverse operation. This is somewhat simpler; after reading the BLOB, we just need to re-add the header.
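The original post shows the locals and the method bodies as screenshots. A rough C/AL sketch of the save direction, written as a function on page 1304, could look like the following. The DotNet subtypes are System.Text.RegularExpressions.Regex, System.Convert, and System.IO.MemoryStream from mscorlib; the regular expression and the reliance on passing a C/AL OutStream where a System.IO.Stream is expected are assumptions of this sketch:

LOCAL SaveSignature(SignatureData : Text)
VAR
  Regex : DotNet Regex;
  Convert : DotNet Convert;
  MemoryStream : DotNet MemoryStream;
  OutStr : OutStream;
BEGIN
  // Strip the data URI header, for example "data:image/png;base64,", keeping only the base 64 payload.
  SignatureData := Regex.Replace(SignatureData,'data:image\/[a-z]+;base64,','');
  // Decode the base 64 string into bytes and stream them into the BLOB field.
  MemoryStream := MemoryStream.MemoryStream(Convert.FromBase64String(SignatureData));
  SignatureImage.CREATEOUTSTREAM(OutStr);
  MemoryStream.WriteTo(OutStr);
  MODIFY;
END;

GetDataUriFromImage does the opposite: it reads the BLOB content back, converts it to a base 64 string with Convert.ToBase64String, and prefixes the data URI header again before the string is handed to PutSignature.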

Finally, we want to update the drawing when we navigate the records:
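The trigger wiring itself (shown as a screenshot in the original post) is short. A sketch, assuming the page control is named Signature and the SaveSignature/GetDataUriFromImage helpers discussed above exist, might be:

Signature::AddInReady()
BEGIN
  CurrPage.Signature.PutSignature(GetDataUriFromImage());
END;

Signature::UpdateSignature()
BEGIN
  CurrPage.Signature.PutSignature(GetDataUriFromImage());
END;

Signature::SaveSignature(signatureData : Text)
BEGIN
  SaveSignature(signatureData);
END;

OnAfterGetRecord()
BEGIN
  CurrPage.Signature.PutSignature(GetDataUriFromImage());
END;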

That’s it!

Now you should be able to save the graphics and when you close and re-open the page or navigate through the Sales Invoices, the picture gets updated accordingly.

Even though the most obvious usage scenarios are on the tablet, this add-in works on all three clients (Windows, Web and Tablet). 

NOTE: To copy the code samples, see Extensibility for the Microsoft Dynamics NAV Tablet Client on MSDN.
