Tech Blogs

  • Fixing Orphaned TFS Branches

    by Marc Moore | Jul 15, 2013

    Most of our team's projects are organized in a proper branching hierarchy like the one shown below:

     Proper Hierarchy Image

    But what if that isn't the case? Consider the problem faced by the Time and Effort team when the DEV branch was parentless; i.e., it was floating about loose, with no defined connection to the TRUNK or other branches.

    Correcting the T&E problem began with a "baseless merge" between the DEV and TRUNK branches, in order to establish a merge path between the two branches:

    tf merge /baseless <ParentBranch> <ChildBranch>

    In our case:

    Baseless Merge Image
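    For reference, the full command might look something like this when run from a mapped workspace (the server paths below are hypothetical; substitute your own TRUNK and DEV branch paths, and /recursive merges the entire branch contents):

    tf merge /baseless /recursive $/TimeAndEffort/TRUNK $/TimeAndEffort/DEV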

    This command will create a pending change in the child branch.  Check this in like any other changeset. 

    At this point the relationship between the two branches should be established.  To verify, view the hierarchy of the parent branch like this:

    View Hierarchy Image

    If you do not see the expected relationship, you may have to define it yourself.  To do this, select the child branch in Source Control Explorer, then use the Reparent action:

     Reparent Action Image

    (Note that the Reparent command is not available on the right-click context menu - you must do it as shown, at least in VS 2010.)

    The reparenting window will appear.  Choose the correct parent for your branch - or choose "No Parent" if you are working with a TRUNK branch:

    Reparenting Choices Image

    After selecting the parent branch, click Reparent to make the relationship happen.

  • Test-Driven Development in Natural

    by Marc Moore | May 22, 2013

    Is TDD/automated unit testing possible in Natural? On one hand, Natural is a programming language like any other. Writing truly modular code is indeed possible. On the other, doing so is far from frictionless.

    Basic TDD requires that external dependencies - databases, screens, etc. - be isolated from the code under test.  That means that testable Natural code cannot be part of a program/sub-program that contains database access code or screen-related code.  Nor are sub-routines discrete, testable units because of their inherent reliance on global variables.  Practically speaking, Natural sub-programs are the only candidates for TDD and only those sub-programs that have no environmental dependencies.

    For an example of a TDD-ready Natural sub-program, consider HNSPMASK, a utility sub-program that masks sensitive data fields.  HNSPMASK has no database or UI dependencies and accepts all of the information it needs to fulfill its purpose through its parameter area:

    0010 ************************************************************************
    0020 * PROGRAM NAME      : HNSPMASK
    0140 ************************************************************************
    0150 *
    0160 DEFINE DATA
    0170 PARAMETER USING HNDPMASK
    0180 *

    HNDPMASK presents a simple interface, but it's easy to imagine a sub-program whose interface is much more complex:

    0010 DEFINE DATA PARAMETER
    0020  1 #HNDPMASK-PARAMETERS (A256)
    0030  1 REDEFINE #HNDPMASK-PARAMETERS
    0040    2 #HNDPMASK-NBR-UNMASKED (I2)
    0050    2 #HNDPMASK-DATA-STR (A253)
    0060    2 #HNDPMASK-DIRECTION (A1) /*R or L
    0070 END-DEFINE 

    Calling HNSPMASK is also simple:

    2820 MOVE 4 TO #HNDPMASK-NBR-UNMASKED
    2830 MOVE 'L' TO #HNDPMASK-DIRECTION
    2890 MOVE #MY-DATA TO #HNDPMASK-DATA-STR
    2900 CALLNAT 'HNSPMASK' #HNDPMASK-PARAMETERS
    2910 MOVE #HNDPMASK-DATA-STR TO #MY-DATA

    Nothing unusual so far, right? Indeed, there is nothing unusual about HNSPMASK at all, other than the fact that a test driver named HBTDMASK was written to prove its functionality (abbreviated below):

    0010 ***********************************************************************
    0020 * PROGRAM      : HBTDMASK
    0060 ***********************************************************************
    0080 DEFINE DATA
    0100 LOCAL USING HNDPMASK
    0120 LOCAL
    0200 1 #TEST-4-CHAR-INPUT                  (A253) CONST<'ABCD'>
    0220 1 #TEST-L2R-4-CHAR-MASK-2-EXPECTED    (A253) CONST<'**CD'>
    0320 END-DEFINE
    0330 *
    0380 MOVE 'L' TO #HNDPMASK-DIRECTION
    1130 *
    1140 * 4-char string should return same string
    1150 * with a 2-char mask size
    1160 *
    1170 MOVE 2 TO #HNDPMASK-NBR-UNMASKED
    1180 MOVE #TEST-4-CHAR-INPUT TO #HNDPMASK-DATA-STR
    1190 CALLNAT 'HNSPMASK' #HNDPMASK-PARAMETERS
    1200 IF #HNDPMASK-DATA-STR EQ #TEST-L2R-4-CHAR-MASK-2-EXPECTED
    1210   PRINT '4-char, mask 2 test passed'
    1220 ELSE
    1230   PRINT '4-char, mask 2 string test failed:'
    1240     ' Expected "' #TEST-L2R-4-CHAR-MASK-2-EXPECTED
    1250     '"; received "' #HNDPMASK-DATA-STR '"'
    1260 END-IF
    1270 *

    Many additional scenarios were devised in HBTDMASK, but the redacted code above illustrates the point.  We know from the tests that HNSPMASK functions as specified and, if it is modified, the tests will demonstrate that it still works or show that the change has broken existing functionality.

    As written, HBTDMASK could be executed in the Test environment via JCL and the results forwarded to the development team as needed.

    Another option would be to create a project-wide test driver program - call it HBTDD, for example - and collect all of an application's Natural-based TDD calls into a single program.

    A third option would be to move the tests into a Natural procedure called via Shadow.  In this scenario, the tests could be integrated into the normal C# unit tests, at the cost of binding these tests to a Shadow ODBC connection.
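    As a rough illustration of that third option, a C# unit test could call the Shadow-exposed procedure over ODBC and assert on the returned results.  This is only a sketch: the DSN, credentials, procedure name, and the idea that the Natural tests return one row per scenario with a RESULT column are all assumptions, and the actual call syntax would depend on how the procedure is exposed through Shadow:

        using System.Data.Odbc;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class NaturalMaskingTests
        {
            // Hypothetical DSN pointing at the Shadow ODBC gateway for the Test environment.
            private const string ShadowConnectionString = "DSN=SHADOW_TEST;UID=myRacfId;PWD=myRacfPwd;";

            [TestMethod]
            public void HnspmaskScenarios_ShouldAllPass()
            {
                using (var connection = new OdbcConnection(ShadowConnectionString))
                using (var command = new OdbcCommand("{CALL HBTDMASK}", connection)) // hypothetical procedure name
                {
                    connection.Open();
                    using (OdbcDataReader reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // Assumes each row reports a scenario name and a PASS/FAIL result.
                            Assert.AreEqual("PASS", reader["RESULT"].ToString().Trim(),
                                "Scenario failed: " + reader["SCENARIO"]);
                        }
                    }
                }
            }
        }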

    This is a simple example, but imagine a complex payroll or benefit calculation sub-program that was entirely free of external dependencies.  All calculations could be performed inside a highly modular, highly testable sub-program.  A suite of unit tests could then be created to prove the calculations and these tests would then provide regression tests against all future changes to the calculations.  The result is that fear of making changes to the calculation is eliminated, making the team more agile and responsive.

  • Creating Pseudo-Tables in Adabas

    by Marc Moore | Apr 16, 2013

    Many of our older applications use Adabas as the backend database. Those applications often use BPP-TABLES to store their lookup/reference data in pseudo-tables.

    BPP-TABLES layout with the important fields/super-descriptors highlighted:

    BPP-TABLES Layout

    Creating a table is a simple process.  First, log into the mainframe, then go to the BPP Test environment.  From the Command prompt, enter M to launch the BPP Test UI.

    Launch BPP Test UI

    Next, enter 451 in the Screen field and press Enter to get to the GENERIC TABLES screen.

    To add a new table, enter an A in the Function column of the first row, then enter the following data values:

    • Key field – your pseudo-table name
    • Maint. on 451 field – Y 
    • Data field – your table description

    For example:

    Add a Table to BPP-TABLES

    Hit Enter to create your table.  You should receive a message like “0202 Record has been successfully added” from BPP.

    Next, enter the first few characters of the new pseudo-table’s key into the Start From field and press Enter:

    Search BPP-TABLES

    Next, enter an X into the Function column of your pseudo-table’s row and press Enter:

    Add Data to BPP-TABLES

    To enter your lookup records, enter an A into the Function column of each row you want to add along with the Key and Data values for the row(s):

    Add Rows to BPP-TABLES

    Press Enter to save your data into the database.  You should receive another “0202 Record has been successfully added” message from BPP.

  • Creating Lookup Data From a C# Class

    by Marc Moore | Apr 08, 2013

    Hard-coding literal values in C# code is a bad practice that we rarely indulge in here at TAMUS ESI.  You will not see many instances of code like:

        if (someObject.StatusCode == "A")

     We prefer something more like:

        if (someObject.StatusCode == SomeProjectConstants.SomeFieldStatus.Active)

    where SomeFieldStatus is a class like this one in HRConnect 2:

        public static class BppLogFieldCodes
        {
            public const string FIT = "FIT";
            public const string ACH = "ACH";
            ...
        }

    Nothing fancy about this, but creating classes/constants like these takes only moments, helps reduce bugs, and makes changing or adding codes easier in future releases.

    What isn't so helpful about this class - or any other basic C# class - is that it doesn't provide any metadata about the code values. For example, it would be very convenient if the "friendly description" for the FIT code were directly associated with the code property in the BppLogFieldCodes class.

    The usual technique for getting code/description translation data into an ESI web application is to use a "Lookups" class that knows how to connect to a data source, read a database table (or tables), and create a list of key/value pairs.  The Lookups class will also normally provide caching of the resulting queries to improve performance.  The Lookup data is then used to populate dropdowns, radio button lists, etc.  Again, nothing fancy.
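    For context, a stripped-down lookup method along those lines might look something like the sketch below.  This is only an illustration, not the actual ESI implementation; the table name, connection handling, and the Code/Description property names on LookupDataDTO are all assumptions:

        // Minimal sketch of a cached code/description lookup (hypothetical details).
        // Requires System.Collections.Generic, System.Data.SqlClient, and System.Runtime.Caching.
        public static class Lookups
        {
            private static readonly MemoryCache Cache = MemoryCache.Default;

            public static IList<LookupDataDTO> GetBppLogFieldCodes(string connectionString)
            {
                const string cacheKey = "BppLogFieldCodes";
                var cached = Cache.Get(cacheKey) as IList<LookupDataDTO>;
                if (cached != null)
                    return cached;

                var result = new List<LookupDataDTO>();
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "SELECT Code, Description FROM dbo.BppLogFieldCode", connection)) // table name is hypothetical
                {
                    connection.Open();
                    using (SqlDataReader reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            result.Add(new LookupDataDTO
                            {
                                Code = reader.GetString(0),
                                Description = reader.GetString(1)
                            });
                        }
                    }
                }

                // Cache the result to avoid re-querying the database on every request.
                Cache.Set(cacheKey, result, DateTimeOffset.Now.AddMinutes(30));
                return result;
            }
        }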

    Such Lookup classes are all well and good, but having cached lookup data does nothing to eliminate the BppLogFieldCodes class and others like it - the constants are still needed for comparisons, etc., as initially demonstrated.

    But what if we could create a class like this one that includes the lookup metadata in the class definition?

        public static class BppLogFieldCodes
        {
            [Description("Federal Income Tax")]
            public const string FIT = "FIT";

            [Description("Direct Deposit")]
            public const string ACH = "ACH";
            ...
        }

    Thanks to the new AttributeInterrogator class in the So.Esi.Core assembly, it is now possible to annotate your class as shown, then write a simple, one-line method to get a list of lookup data for use in your app's UI.  For example:

        public IList<LookupDataDTO> GetBppLogFieldSets()
        {
            return MapAttributeListToLookupData(
                AttributeInterrogator.GetTypeAttributes<DescriptionAttribute>(
                    typeof(HRConnectWebConstants.BppLogFieldCodes)));
        }

    The GetBppLogFieldSets method can be used in exactly the same way as methods in the "Lookups" data wrapper class are used now, but no database connection/access is required.
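    The MapAttributeListToLookupData helper is not shown in this post; conceptually it simply projects the attribute results into lookup DTOs.  A minimal sketch, assuming LookupDataDTO exposes Code and Description properties and that the DescriptionAttribute exposes its text via a Description property (both assumptions):

        private static IList<LookupDataDTO> MapAttributeListToLookupData(
            IList<AttributeInformation<DescriptionAttribute>> attributes)
        {
            var result = new List<LookupDataDTO>();
            if (attributes == null)
                return result;

            foreach (var item in attributes)
            {
                result.Add(new LookupDataDTO
                {
                    // The constant's value (e.g. "FIT") becomes the lookup code...
                    Code = item.Value as string,
                    // ...and the [Description] text becomes the friendly description.
                    Description = item.Attributes.Description
                });
            }
            return result;
        }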

    So how is this accomplished? 

    The So.Esi.Core assembly contains a DescriptionAttribute class that inherits from System.Attribute and accepts a description as a parameter to its constructor.  That allows the [Description] attribute to be applied to a class and its properties.
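    In outline, such an attribute class looks something like the following sketch (a hypothetical reconstruction rather than the actual So.Esi.Core source; the Description property name is assumed):

        [AttributeUsage(AttributeTargets.All, AllowMultiple = false)]
        public class DescriptionAttribute : System.Attribute
        {
            public DescriptionAttribute(string description)
            {
                Description = description;
            }

            // Friendly text supplied when the attribute is applied.
            public string Description { get; private set; }
        }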

    The Description attribute can also be applied to a C# enum and its fields.  To make enums even more descriptive, a CodeDescription attribute was created to handle the "three-way" mapping that often takes place with an enumeration. 

    In the three-way enum scenario, the three values in play are:

    1. The enum value itself
    2. The lookup code
    3. The friendly description

    For example, consider the following enum modeled on TrainTraq 3:

        [Description("Role")]         
    public enum CodedRatingEnum
    {
    [CodeDescription("APP-EMPLOYEE", "Employee")]
    Employee = 1,

    [CodeDescription("APP-DEPT-ADM", "Department Admin")]
    Good = 2,
    ...
            }
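    The CodeDescription attribute itself is not reproduced in this post.  Conceptually it is just an attribute that carries both the code and the description; a hypothetical sketch (property names assumed):

        [AttributeUsage(AttributeTargets.Field, AllowMultiple = false)]
        public class CodeDescriptionAttribute : System.Attribute
        {
            public CodeDescriptionAttribute(string code, string description)
            {
                Code = code;
                Description = description;
            }

            // The lookup code (e.g. "APP-EMPLOYEE") and its friendly description.
            public string Code { get; private set; }
            public string Description { get; private set; }
        }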


    The AttributeInterrogator class helps make this metadata convenient to consume.  It also allows developers to create their own custom metadata attribute classes and apply them in their applications.

    A quick look at the current implementation of the GetPropertyAttributes method, which is used to scan a class for its metadata, demonstrates how it works beneath the covers:

        public static IList<AttributeInformation<T>> GetPropertyAttributes<T>(object source)
            where T : System.Attribute
        {
            IList<AttributeInformation<T>> result = null;

            Type sourceType = source.GetType();
            MemberInfo[] members = sourceType.GetMembers();
            foreach (MemberInfo member in members)
            {
                if (member.MemberType == MemberTypes.Property)
                {
                    object[] attrs = member.GetCustomAttributes(typeof(T), false);
                    if (attrs != null && attrs.Length > 0)
                    {
                        if (result == null)
                            result = new List<AttributeInformation<T>>();

                        object value = sourceType.GetProperty(member.Name).GetValue(source, null);
                        result.Add(new AttributeInformation<T>
                        {
                            Name = member.Name,
                            Value = value,
                            Attributes = attrs[0] as T
                        });
                    }
                }
            }
            return result;
        }

    AttributeInformation is a simple class in So.Esi.Core:

        public class AttributeInformation<T>
            where T : class
        {
            public string Name { get; set; }
            public object Value { get; set; }
            public T Attributes { get; set; }
        }

    It is notable only for the Attributes property, which is generic.  This is what allows application developers to define their own custom attribute classes for use with the AttributeInterrogator.

    The AttributeInterrogator.GetPropertyAttributes method uses reflection to iterate over the specified class' properties.  It then checks each property to see if a custom attribute of the desired type T is defined.  If so, the property name, value, and attribute information are captured in the result list and returned.

    Similar approaches are taken for interrogating class types and enums.
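    For constant-holding classes like BppLogFieldCodes, the same idea applies to fields rather than properties.  The actual GetTypeAttributes implementation is not shown here, but a sketch of the field-scanning variant might look roughly like this (requires System.Reflection; details may differ from the real So.Esi.Core code):

        public static IList<AttributeInformation<T>> GetTypeAttributes<T>(Type sourceType)
            where T : System.Attribute
        {
            var result = new List<AttributeInformation<T>>();

            // Public const fields surface as public, static, literal fields.
            foreach (FieldInfo field in sourceType.GetFields(BindingFlags.Public | BindingFlags.Static))
            {
                if (!field.IsLiteral)
                    continue;

                object[] attrs = field.GetCustomAttributes(typeof(T), false);
                if (attrs.Length > 0)
                {
                    result.Add(new AttributeInformation<T>
                    {
                        Name = field.Name,
                        Value = field.GetRawConstantValue(),
                        Attributes = attrs[0] as T
                    });
                }
            }
            return result;
        }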

  • Importing Adabas Data into SQL Server

    by Marc Moore | Mar 26, 2013

    As SQL developers we occasionally need quick access to data that is stored in Adabas. Formal SSIS processes work fine for production use, but in development mode, Adabas data isn't always easy to get because of issues connecting to the mainframe.  Recently I began to use the SQL Server Import and Export Wizard for this purpose and it worked well, once I worked the kinks out of the connection process.

    Choose a Data Source

    To copy a table from Adabas to SQL Server using  the SQL Server Import and Export Wizard, first choose the .NET Framework Data Provider for ODBC as the connector type for the data source.

    Next, configure the ODBC connection as follows, filling in the blanks I've excluded from the screen shot for security reasons:

    ODBC Connection Configuration Image

    You'll have to fill in the blanks on your own, but here are some hints:

    • You should already have a suitable DSN configured on your development machine, one that uses your RACF credentials to connect to Shadow.
    • Even so, you'll want to supply your "uid" and "pwd" here, as per the DSN
    • The "host" and "port" can also be derived from the DSN

     Choose a Destination

    NOTE: Given that we're going to be copying a table down to SQL, you'll want to create that table in your destination SQL Server database now, if you've not already done so.

    The destination connection type will be SQL Server Native Client.  Point the connection to SQL Server and specify the target database.

    Specify Table Copy or Query

    Because of Shadow's peculiarities, you must choose "Write a query..." and type in the SQL you want to execute to fetch the data from Adabas.  This should be a simple query in the form of:

     SELECT FIELD_1, FIELD_2, FIELD_N FROM TABLE_NAME

    Note the explicit use of underscore characters. While Adabas uses dashes as delimiters, Shadow forces the use of underscores.

    Select Source Tables and Views

    The wizard (stupidly) defaults the Destination to your query, so be sure to select the correct destination table from the non-obvious Destination dropdown.

    Then, with the Source/Destination row highlighted, click the Edit Mappings button and match the field names in your ad hoc SQL query to the columns in your SQL Server table.

    Edit Mappings Image

    Wrap Up

    That's basically it. Click through the rest of the wizard and run your import. 

    If you get a "dynamic mapping" error, you may have tried to import the table directly rather than writing your own SQL, which does not work.

    If you get a security error, the RACF ID you specified may not have access to query the Adabas tables directly.

    The option exists to save the import as an SSIS package, which could be very helpful if you need to repeat the operation.

  • Visual Studio Missing Report Data Window

    by Marc Moore | Oct 18, 2012

    In Visual Studio 2008 – still our SSRS report editing tool of choice – the Report Data panel often “goes missing” and cannot be restored through the Visual Studio menu. 

    The MKC - Magic Key Combination - to restore it is Control+Alt+D.

    image 

  • SSO Build Issues

    by Marc Moore | Aug 06, 2012

    Since moving SSO to the new server, there have been issues getting the application started again after the build is complete.  Symptoms include:

    • SSO not responding to requests for the logon page
    • The PeopleAdmin handler not responding to requests for authentication

    At this time we do not have a solution to the problem, but there is a simple work-around:  Restart SSO’s app pool after the build.  This seems to cure both situations.

    Moreover, after a build it would be prudent to verify the PeopleAdmin handler.  To do this, hit the following URLs in a browser or Fiddler:

     

  • Developer-level Change to SOCommon

    by Marc Moore | Aug 02, 2012

    Our SOCommon database is a read-only database containing commonly used lookup values and other reference data shared by many of our applications.

    Because SOCommon is so widely used, the change control process for its schema needs to be strictly adhered to.

    Recently the DBA team added a new table/data to the Development instance of the SOCommon database to help with a project I was working on.  After this was done, I:

    1. created a vHRLookupValues view over the new table (the standard for apps querying SOCommon resources requires this)
    2. added the table and view objects to the SOCommon database project in the DEV branch
    3. built the project and created the SOCommon.dbschema file
    4. updated the SOCommon.dbschema file in the GeneratedDbschema folder in the DEV branch

    The first two steps are important to keep the database project in sync with the Development database and to keep source control up to date.

    Steps 3 and 4 are important to help developers on other projects get the current SOCommon.dbschema file for use in their projects because they should be getting this reference file from the standard location if they need a “hot” change:

    image

    Projects that do not need an up-to-the-minute reference to SOCommon’s schema should get this file from the Prod branch instead.

    At this point I would normally advise the DBA team of the change so they can review the change and handle the promotion to the Training and Prod branches. 

    The above is the essential process for making a change to the SOCommon schema.  Others should follow these steps, at a minimum, to help ensure that the SOCommon database project stays in reasonable shape.

  • Creating an Index on an XML Fragment in SQL Server

    by Marc Moore | Jul 20, 2012

    Imagine you are given a table with a column of XML in it and a few million rows of data.  Further imagine that you want to write an efficient query using an attribute of that XML in the WHERE clause of your query.

    A short test scenario describes how I recently dealt with the problem.  First, a basic table definition.  The XML will end up in the SomeContent field.

    CREATE TABLE [dbo].[MarcTest]
    (
        [ID] [int] NOT NULL,
        [TypeCode] [varchar](20) NOT NULL,
        [SomeContent] [varchar](MAX) NOT NULL
    CONSTRAINT [PK_MarcTest] PRIMARY KEY CLUSTERED
        ([ID] ASC)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO

    Now for some test data.  As in my real-life scenario, the schema of the XML data varies from row to row.

    INSERT INTO marctest (ID, TypeCode, SomeContent) VALUES(1, 'CODE1', '<MyDoc><MetaData><EmployeeID>13791</EmployeeID><Name>Marc Moore</Name></MetaData><ContentData><ContentField1>This is some content.</ContentField1><ContentField2>This is some additional content.</ContentField2></ContentData></MyDoc>')
    INSERT INTO marctest (ID, TypeCode, SomeContent) VALUES(2, 'CODE2', '<MyDoc><MetaData><EmployeeID>4</EmployeeID><Name>Xi Zhang</Name></MetaData><ContentData><ContentField1>This is some content.</ContentField1><ContentField2>This is some additional content.</ContentField2></ContentData></MyDoc>')
    INSERT INTO marctest (ID, TypeCode, SomeContent) VALUES(1001, 'OTHER1', '<SomeOtherData><Data1>111111</Data1><Data2>222222</Data2><Data3>333333</Data3></SomeOtherData>')
    GO

    It’s not possible to create an index directly on an XML column’s XPath expression and the XML indexes offered by SQL Server 2008 have a bad reputation for being bloated.  So what to do?  The solution that follows was blessed by David, our DBA, as a valid one.

    Based on this question and answer about SQL, XML, and XPath, what I first wanted to do was create a computed column based on an XPATH expression like this one:


    image

    Unfortunately that does not work.  There are a couple of reasons for this: the SomeContent column is not an XML column, which doesn't help, nor does the fact that SQL considers this expression non-deterministic.

    Various articles led me to create this scalar function to help solve the problem:

    CREATE FUNCTION dbo.MarcTestEmployeeID
    (
        @SomeXml xml
    )
    RETURNS INT
    AS
    BEGIN

        DECLARE @ret INT = @SomeXml.value('(/MyDoc/MetaData/EmployeeID/node())[1]', 'int')
       
        IF @ret IS NULL
            SET @ret = 0
           
        RETURN @ret

    END

    Now SQL was happy and let me create the computed column:

    ALTER TABLE MarcTest ADD EmployeeID as ([dbo].[MarcTestEmployeeID]([SomeContent]))

    Unfortunately, when I attempted to create my index:

    CREATE INDEX MarcTestEmployeeID_IDX ON dbo.MarcTest(TypeCode, EmployeeID)

    SQL became unhappy once more:

    Msg 2729, Level 16, State 1, Line 2
    Column 'EmployeeID' in table 'dbo.MarcTest' cannot be used in an index or statistics or as a partition key because it is non-deterministic.

    I’m sure SQL has its own reasons for doing this, but its logic was clearly incorrect as far as my function goes.  Given a blob of XML, dbo.MarcTestEmployeeID will always return the same answer for that XML; ergo, it is deterministic.  Yet SQL remained unhappy.

    Happily, this question and answer about deterministic scalar functions solved the issue with a one-line change to the function definition:

    CREATE FUNCTION dbo.MarcTestEmployeeID
    (
        @SomeXml xml
    )
    RETURNS INT
    WITH SCHEMABINDING

    After creating my index, my sample “SELECT *” query returned the following, which is just what I would expect:

    image

    Next, I inserted a couple of thousand more records to make things more interesting and ran this query, which is similar to the one I plan to run in the real world:

    SELECT * FROM MarcTest WHERE TypeCode = 'CODE1' AND EmployeeID = 13791

    The execution plan indicates that my new index that’s built on the computed column using the UDF and XML data is being used:

    image

    Exactly what was hoped for!

  • Replacing an External Reference in a Database Project

    by Marc Moore | Jul 20, 2012

    Initially I thought I could just check out my project’s SOCommon.dbschema file, copy over the new version, and rebuild my database project to determine if it was compatible with the new external database’s schema; however, this did not seem to work.

    Removing the reference to the SOCommon.dbschema file and re-adding it using the new version of the file made things worse because I did not understand how to use the Add Database Reference dialog box and failed to set up the correct Database Variable.

    For some ESI projects we used SQL command variables to define the name of the SOCommon database (because it wasn’t always SOCommon in every environment).  The correct Database Reference settings for such a project are:

    Database Reference Settings (SQLCMD Variable) Image

    For other projects the references to SOCommon are made directly in the SQL objects as literals.  For such projects the correct Database Reference settings are:

    Database Reference Settings (Literal References) Image

    We suppress the errors caused by references in the external project because we generally don't care about the SOCommon database project's shortcomings with regard to its references to the Data Warehouse and Ace Project databases.