The PLM Dojo » Programming

Interpreting ITK Error codes


It’s late in the day and you’re running on fumes. You decide to run your code through one last unit test before calling it quits for the day.

And then it happens. Ba-boom! It all blows up. Unhandled exception. Error code 74708. No error message recorded. Where do you start looking for the cause?

One thing that can help is to be able to find out what that particular error code means, so knowing a bit about how they’re defined, and where, can be helpful.

All the error codes that Teamcenter uses are #defined within its header files. Generally speaking, each module has its own range of error codes, which usually allows for at least 100 distinct error codes, although it can allow for several thousand. Typically only a small subset of that range is actually assigned a value. The remaining space is reserved for future expansion.

The base value for each range of error codes will be #defined something like this:

#define MODULE_error_base nnn000

Where nnn are some numeric digits. The actual size of the error_base number will vary from module to module, but it’ll always end in at least a couple of zeros. The error_bases are all defined in a couple of header files: mainly error_bases.h and emh_const.h. There may be others, but honestly, memorizing which headers define the error_bases isn’t all that important, as you’ll see later.

There will also be another header file that defines the actual error codes for the module. Within that header file the error code definitions will follow this pattern:

#define MODULE_first_error  (MODULE_error_base + 1) 
#define MODULE_second_error (MODULE_error_base + 2)
#define MODULE_third_error  (MODULE_error_base + 3) 
// etcetera...

So now, knowing this you can track down what the error code means.

1. Figure out the base number for this error code

In our example the error code is 74708. It’s probably a pretty safe bet to assume that the error_base for this code is 74700. Our second guess would be 74000, but it would be very unusual for a module to #define over 700 different error codes.

2. Find the name of the error base.

Now you could open up each of the header files that #define error bases and look for the value 74700, but I find it easier to just do a full search of the include directory for any header that contains that value. When we do that we discover that in the header file emh_const.h, there’s this entry:

/** CE ERRORs */ 
#define EMH_CE_error_base 74700

So the error_base we’re concerned with is called EMH_CE_error_base.

3. Find the header file that #defines the specific errors codes for that error_base

In other words, search the header files for the term EMH_CE_error_base and see what turns up. When we do that we discover that EMH_CE_error_base shows up in the header file ce_errors.h.

4. Find the definition for this specific error code

Within ce_errors.h we find the following lines:

#define CE_ERROR_BASE                   EMH_CE_error_base 
 
#define CE_init_error                   ( CE_ERROR_BASE + 1 ) 
#define CE_no_load_usersession_object   ( CE_ERROR_BASE + 2 ) 
#define CE_no_type                      ( CE_ERROR_BASE + 3 ) 
#define CE_no_properties                ( CE_ERROR_BASE + 4 ) 
#define CE_invalid_data_type            ( CE_ERROR_BASE + 5 ) 
#define CE_data_type_not_supported      ( CE_ERROR_BASE + 6 ) 
#define CE_argument_number_out_of_range ( CE_ERROR_BASE + 7 ) 
#define CE_invalid_argument_type        ( CE_ERROR_BASE + 8 ) 
#define CE_invalid_global_operation     ( CE_ERROR_BASE + 9 )

So you can see that things got a little bit tricky here, but not too tricky. First they re-#defined EMH_CE_error_base as CE_ERROR_BASE. Then they went along with the usual pattern for #defining error codes. We’re looking for 74708, or in other words CE_ERROR_BASE + 8. And there it is, CE_invalid_argument_type. Now I have a clue why my code crashed. Somewhere I’m calling a CE_ function (CE is the Condition Engine module, by the way) and I’m passing the condition an invalid argument. So, my next task is to look for which conditions I’m calling with a CE function and double-checking what data types the Condition is expecting versus what data type I’m actually passing.

In this specific example it turned out that I was invoking a Condition that expected an argument of type Item, but I was passing it the tag_t of an ItemRevision instead. Once I passed the Condition an Item, the bug was fixed.
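One related tip: when an error does have a message registered, you can ask Teamcenter for it at runtime instead of hunting through headers. Here’s a minimal sketch using EMH_ask_error_text; I’m assuming it accepts the raw error code, so double-check the signature in emh.h before relying on it:

#include <stdio.h>

// Hypothetical helper: log whatever message Teamcenter has
// registered for an ITK error code.
// NOTE: verify EMH_ask_error_text's signature in emh.h -- it may
// expect an index into the error store rather than the raw code.
static void log_itk_error(int status)
{
    char *message = NULL;
    if (EMH_ask_error_text(status, &message) == ITK_ok && message != NULL)
    {
        printf("ITK error %d: %s\n", status, message);
        MEM_free(message); // ITK-allocated strings are freed with MEM_free
    }
    else
    {
        printf("ITK error %d: no message recorded\n", status);
    }
}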

Questions

Have you ever been stumped by a mysterious ITK error code with no associated error message?




Working with Conditions in ITK code


Previously, I’ve addressed the basic questions of what conditions are and how they are used in the BMIDE. Today I’m going to talk about how I’m finding them to be a useful programming aid. By incorporating Teamcenter’s Conditions in ITK code you can make your ITK code simpler, more flexible, and easier to maintain.

A great advantage of Conditions is that they allow you to access an object’s properties without writing any ITK code. Now I am perfectly comfortable writing ITK, but I also appreciate that when you have to build and deploy DLLs to update your system, the maintenance effort is not trivial. Any opportunity I have to get TC to behave the way I need it to without having to write code is a good thing.

ITK, To get the job done

Of course, the truth of the matter is that you often have to write ITK to get the job done, because let’s face it, ITK gives you the power to redefine and extend Teamcenter. In my ITK code I often find that I need to check the state of an object in order to determine what needs to happen next. For example, is the dataset checked in? Is the revision statused? What is the owning group? In Teamcenter Engineering I would have to write code to access the property values I wanted and then code up the evaluations themselves. And if I somehow made a mistake in my code or if the requirements changed I would have to update my code and rebuild and re-deploy the DLL. For example, if it was someday decided that it wasn’t sufficient to simply test if a revision was statused or not, and instead I had to test if it specifically had statusA or statusB, then that would require a code change.

Advantages of using Conditions in ITK

Conditions allow for a better solution. First, I can define a Condition in the BMIDE that does the appropriate check:

isStatusCorrect(ItemRevision rev) := 
    rev.last_release_status.name = "Approved"

and then use that condition in my code:

    CE_find_condition("isStatusCorrect", &condition); 
    CE_evaluate_condition(condition, 1, &revision, &result); 
    // do something with result...

More compact

The first advantage is that writing the code to do the same evaluation that the condition does with the expression rev.last_release_status.name = "Approved" would take several lines of ITK. First you would have to retrieve the last_release_status property of the revision object, then you’d check whether or not the status returned was NULLTAG, meaning the revision has no status, then if it wasn’t NULLTAG you’d have to get the name property from the status object, and finally compare that value to the string “Approved”. If you assume that the number of bugs in a program is proportional to the number of lines of code, then a one-line expression is quite an improvement over the half dozen or so lines of code you’d need to do the check with ITK.
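To make the comparison concrete, here’s roughly what that check looks like in ITK. This is only a sketch using the AOM property functions, with error checking omitted:

// Sketch: the ITK equivalent of the one-line condition
// rev.last_release_status.name = "Approved"
logical is_status_correct = false;
tag_t   status            = NULLTAG;

AOM_ask_value_tag(revision, "last_release_status", &status);
if (status != NULLTAG) // NULLTAG means the revision is unstatused
{
    char *name = NULL;
    AOM_ask_value_string(status, "name", &name);
    is_status_correct = (strcmp(name, "Approved") == 0);
    MEM_free(name);
}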

BMIDE validation of expression

The second advantage is that it is entirely possible to write ITK code that compiles and builds perfectly, and yet is fatally flawed. For example, if you asked for the “latest_release_status” property instead of the “last_release_status”, your code would appear to be fine but fail at runtime. However, when you create conditions the BMIDE knows what attributes are available to you at each step of the expression. When you type the period after “rev” it’ll offer you all the legal property names you can choose at that point. And if you do manually enter a property name that is incorrect it’ll flag the expression as incorrect and not let you save it to the template.

Now, granted, when you do write the ITK code you have to get the name of your condition correct, but that’s only one potential failure point as opposed to the two in my example — one when you look up the “last_release_status” attribute, another when you look up the value of the “name” attribute.

Live update fixes

Finally, let’s see what happens when someday the requirements change. Say, for example, that the requirement that the revision have the Approved status is changed to say that the status can be either Approved OR Frozen. You’d merely need someone to update the condition like so:

isStatusCorrect(ItemRevision rev) := 
    rev.last_release_status.name = "Approved" 
    OR 
    rev.last_release_status.name = "Frozen"

That’s a task that any BMIDE administrator can do without anyone needing to make any changes to the ITK code. If you’re familiar with the classic template method design pattern you’ll recognize that that’s essentially what we’ve used; the ITK defines the skeleton of the algorithm and the conditions implement key bits of functionality.


Exceptional Error Handling in Teamcenter ITK and NX Open C


The normal way of testing for and handling errors when calling the Teamcenter ITK or UG/NX Open C APIs is to #define a macro to wrap your API calls in. Macros, however, are a problematic and flawed solution. If you’re compiling your code as C++ you can use a much safer inline function together with exception handling to provide a much more flexible and robust error handling mechanism.

Note: We’re covering some basic programming today. I’m interested to hear if this is too basic or not.

Simple Error Handling

Both the ITK and NX Open C APIs — like many C language APIs — follow the convention that every function returns 0 if it completes without errors or some other value if it doesn’t.

This means that a naive but diligent coder might write something like this:

    int result = 0; 
    result = API_do_something(input, &output); 
    if( result != 0 )
    { 
        // get error message 
        // handle error or quit? 
    } 
    result = API_do_something_else(input, &output); 
    if( result != 0 )
    {
         // get error message 
         // handle error or quit? 
    } 
    // etc. etc.

Error Handling with Macros

Naturally, all those assignments to result and the conditional checks afterwards grow tiresome. So the next thing that’s often done is to #define a macro or two to simplify matters. If you search for the phrases “sample itk program” or “sample open c program” on the GTAC website you’ll find lots of ITK and Open C code examples. Typically in those samples you’ll see something like this:

//**** ITK Example ****//
#define IFERR_ABORT(X)  (report_error( __FILE__,__LINE__,#X,X,TRUE))
#define IFERR_REPORT(X) (report_error( __FILE__,__LINE__,#X,X,FALSE))
 
static int report_error(char *file, int line,
                        char *call, int status,
                        logical exit_on_error)
{
    if (status != ITK_ok)
    {
        // getting error message and writing it
        // to syslog omitted for clarity
 
        // AIIIEE!! Kill the Program, NOW!!!
        if (exit_on_error) exit(status);
    }
    return status;
}
 
// Use the macro. Program dies here if something goes wrong.
// Have fun looking through the syslog for the error message.
IFERR_ABORT(AOM_refresh(itemRev, FALSE));

The Problem with Macros

That works, to an extent. But there are some drawbacks. First, you only have two options: write a message to the syslog or exit the program — hard. But what if you can do something to handle the error and proceed? Sorry, not supported unless you #define another macro specifically for that or manually insert code specifically for that one condition. And what if the proper place to handle a particular condition is a few levels up in the call stack? You’ll just have to make sure all the calls in the stack know to pass that particular error back to their caller.

And if that’s not enough for you, I’ll appeal to the authority of Scott Meyers, author of Effective C++ (absolutely essential for C++ programmers), who entitled item 1 of his book “Prefer const and inline to #define”, and of Herb Sutter and Andrei Alexandrescu, who covered the same topic in the also-essential C++ Coding Standards. They summarized their objections as such:

TO_PUT_IT_BLUNTLY: Macros are the bluntest instrument of C and C++’s abstraction facilities, ravenous wolves in functions’ clothing, hard to tame, marching to their own beat all over your scopes. Avoid them.

Suffice to say, they convinced me. If you need more convincing I suggest you look up their full argument in their books.

Exceptional Error Handling

If you haven’t guessed yet, I like to write my programs in C++ instead of C. The advantages of C++ are just too great; there are too many useful tools in C++ to limit yourself to C (in my opinion). One tool provided by C++ is exception handling with try/catch blocks. I presume that most programmers are already familiar with the concept, either from C++ itself or from one of the many other languages with similar functionality, so I won’t discuss it in depth.

With the twin goals of avoiding macros like the black death and utilizing exception handling, a superior alternative to the canonical macros can be developed. First, we’ll define a custom exception type:

class ITKError : public std::runtime_error
{
private:
    int _error;
 
public:
    // std::runtime_error, unlike std::exception, portably
    // accepts a message string in its constructor
    explicit ITKError( const int error )
    : std::runtime_error( ITKError::get_message(error) ),
      _error(error)
    {
        EMH_clear_last_error(error);
    }
 
    int error() const { return this->_error; }
 
    static std::string get_message(const int error)
    {
        // Use EMH_ask_errors or
        // EMH_ask_error_text to
        // retrieve an error message
    }
};

Next we’ll create our alternative to the IFERR_ABORT(X) macro:

inline void CHECKITK(const int error)
{
    if(error) throw ITKError(error);
 
    return; // no error, so do nothing.
}

Now we have some more robust options available to us for handling errors:

int inner(int argument)
{
   // No error handling at all if something goes wrong
   CHECKITK( API_some_function(argument) );
}
 
int middle(int argument)
{
    try
    {
        inner(argument);
    }
    catch(const ITKError &e)
    {
        // do some cleanup...
        cleanup();
 
        // pass the error along
        throw;
    }
}
 
int outer(int argument)
{
    try
    {
        middle(argument);
    }
    catch(const ITKError &e)
    {
        // two errors we can handle, the rest we throw
        if( e.error() == API_first_error_condition )
        {
             handle_first_error_condition();
        }
        else if( e.error() == API_second_error_condition )
        {
             handle_second_error_condition();
        }
        else
        {
            // no special handling possible, 
            // just pass the error on up.
           throw;
        } 
    }
}

So instead of simply reporting the error and bombing out, in this example if inner() hits an error it throws an exception and exits, passing control back to its caller, middle(), which does some cleanup and then passes the error on to its caller, outer(), which handles two specific error conditions but passes any other error up the chain.

And that, I think, is a much better way to check for, and handle, errors.

Addendum: A warning about Exceptions and C

If you’re writing either Teamcenter ITK or NX Open C programs, your “main” function, e.g. ufusr() in Open C or a …register_callbacks() function in ITK, will need to be declared as extern "C". Unless you enjoy it when programs blow up with nasty error messages, you need to be sure that you do not let any exceptions escape from an extern "C" function. Exceptions are C++; extern "C" functions are straight C. They don’t know how to handle exceptions, and if one does come through, the system will not know what to do with it. Then it’s boom-boom time.

To be absolutely safe, wrap all of your calls inside of a try{} block and provide a fail-safe catch(...) block which will capture any errors that are thrown.

Example

extern "C" extern DLLAPI
int MY_CUSTOM_LIB_register_callbacks()
{
    try
    {
        do_everything();
    }
    catch(const ITKError &e)
    {
        report_itk_error(e);
        return e.error();
    }
    catch(...) // catch anything else that might be thrown
    {
        report_unknown_error();
        return UNKNOWN_ERROR;
    }
    return 0; // no errors caught, everything is okay.
}

Now you’re covered and no exceptions will escape to where they shouldn’t be.


How to use PDM Server to call ITK from NX Open


If you’re programming under an NX Manager environment — that is, programming for an NX session that’s connected to Teamcenter — sooner or later you’re going to want to interact with Teamcenter for something. Maybe you need to get some information about a part that the NX Open API can’t provide, such as asking the owning group of a part. Or maybe you need to get Teamcenter to do something, like submit a part to a workflow.

Now, you might be tempted to try something like mixing ITK code with your NX Open code, and you might get it to compile, but you’ll likely have trouble linking and you certainly won’t get it to run.

Why?

Welcome to DLL Hell, my friend. Fortunately, there’s a way out, and it’s called PDM Server.

DLL Hell: Mixing NX Open and ITK

I remember when I was a naive young programmer. I thought that since NX and Teamcenter were both made by the same company and worked together that surely I could use functions from both the ITK and Open C APIs in a program. Open a part, update its BOM, send it to a workflow. Should be easy, right?

Mwuuuu-haa-ha!

I don’t remember just how long I beat my head against that particular wall but finally I got the idea that it just wasn’t going to work.

I can’t really explain why you can’t mix ITK and Open C. Sure, there are technical details I do know, like how a tag_t in Open C keeps track of an object in the CAD file while a tag_t in ITK keeps track of an object in the Teamcenter database. Or I could go into how there are incompatible definitions of some functions used internally by both sets of libraries, which means that if you link to both, one library or the other is going to pull in a reference to the wrong function definition and throw a hissy fit.

What I can’t tell you is why that can’t all be ironed out and resolved. Maybe it’d just be too huge of an undertaking for too little gain for a company in a competitive marketplace. Maybe that much integration between NX and Teamcenter would cause them some sort of anti-trust hassle. Maybe there’s some technical issue involved that I just don’t comprehend.

Regardless, what I do know is that there are a few ways of getting around this issue. And first among those methods is something called PDM Server. You need to know how to use it (or so I assume, since you’ve read this far already; I can’t imagine someone who isn’t dealing with this mess not having bailed four paragraphs ago).

I’ll be honest, I am on record elsewhere saying that I’m not a huge fan of PDM Server. But it does have three things going for it:

  1. It works.
  2. It’s simple.
  3. It’s the standard.

What that means is that you’re (a) likely to see it in other people’s code, and (b) if you’re stuck, there’s a pretty good-sized pool of users out there who can help (well, “good-sized pool” in terms of the admittedly small pond that is ITK and Open C programming). So, you should take it upon yourself to understand how it works.

How it Works

Well, that was a subtle segue.

PDM Server is sort-of/kind-of a primitive form of a Service Oriented Architecture (SOA) for NX and Teamcenter. It lets you pass input from NX to Teamcenter and then get some output back from Teamcenter. Your input and your output each consist of a single integer and a single char* string. What their values are and what those values mean is entirely up to you; you just have to be sure that your NX and Teamcenter code have a common understanding.

PDM Server: Common header file

Since both NX and Teamcenter have to agree on what integer codes are passed back and forth and what they mean, I like to create a header file which will be #included in both sets of code. Within it I define the integer values that will be used.

// file: plmdojo_pdmserver_codes.hxx
enum
{
    PLMDOJO_ask_owning_group,
    PLMDOJO_ask_item_type,
    PLMDOJO_submit_to_workflow
};

PDM Server: Teamcenter side

On the Teamcenter side you have a little bit of work to do. Here’s a breakdown:

  1. Pick a name for a customization library. In the examples below I’m using libplmdojo
  2. In your preferences, register your customization library by adding its name to the preference TC_customization_libraries
  3. Create the actual user exit library. This is a shared library (i.e. a DLL on Windows) that you write yourself. Within it you need to create at least two functions:
    1. The actual function that will be executed when a PDM Server call is made from NX. Example:
      extern "C" // necessary if you're 
                 // compiling as C++ (which I recommend)
      DLLAPI int PLMDOJO_invoke_pdm_server(
          int *decision, va_list args)
      {
         // discussed later
      }
    2. A function named your_library_name_register_callbacks(). This function needs to call CUSTOM_register_exit() to register the function defined above as the callback function for USER_invoke_pdm_server.
      // Example: Registering a user exit callback function
       
      extern "C" // for C++ compilation
      DLLAPI int libplmdojo_register_callbacks()
      {
          CUSTOM_register_exit(
              "libplmdojo",             
              // my library name
       
              "USER_invoke_pdm_server", 
              // the user exit we're customizing
       
              (CUSTOM_EXIT_ftn_t)PLMDOJO_invoke_pdm_server 
              // the callback function
                              );
       
          return 0;
      }
  4. Implement your callback function. This breaks down as follows:
    1. Extract the parameters passed to the callback from the va_list argument using the standard C va_arg() macro.

      Every user exit has a different signature to which the va_list passed to its callbacks expands. For USER_invoke_pdm_server the va_list expands to:
      int input_code,
      char* input_string,
      int* output_code,
      char** output_string
    2. Set the decision input parameter to ALL_CUSTOMIZATIONS, ONLY_CURRENT_CUSTOMIZATION, or NO_CUSTOMIZATION.

      NO_CUSTOMIZATION means to just use the default implementation of USER_invoke_pdm_server, which obviously doesn’t do us much good for this example so we won’t use that. I’ll use ALL_CUSTOMIZATIONS and let other libraries have a crack at handling their own PDM Server calls.

    3. Pass the input_code to a switch or if/else if statement. I like switch.
    4. Within each case: statement (or if block), pass the input string to a function of your own making which will parse the input string, execute whatever ITK calls are appropriate, set the output code, and determine what should be in the output string (if anything).
    5. Allocate memory for the output string using malloc() and populate it.
    // Example: Implementing a PDM Server callback
     
    // prototypes for helper functions. Definitions not shown.
    int ask_owning_group(
        const std::string &input_str, 
        std::string &output_string);
     
    int ask_item_type(
        const std::string &input_str, 
        std::string &output_string);
     
    int submit_to_workflow(
        const std::string &input_str, 
        std::string &output_string);
     
    DLLAPI int PLMDOJO_invoke_pdm_server(
        int *decision, va_list args
        )
    {
      // get the input parameters from the va_list:
      int    input_code    = va_arg(args, int);
      char*  input_str     = va_arg(args, char*);
      int*   output_code   = va_arg(args, int*);
      char** output_str    = va_arg(args, char**);
      // set decision. We'll let other customizations have
      // a chance to handle this PDM Server call too.
      *decision = ALL_CUSTOMIZATIONS;
     
      // I do all intermediate string processing with std::string's 
      // instead of char* strings. Later I'll copy the final 
      // std::string's contents to the char* output_str
      std::string output_string;
     
      // pass the input_code to a switch():
      switch(input_code)
      {
      case PLMDOJO_ask_owning_group:
        *output_code = ask_owning_group(input_str, output_string);
        break;
      case PLMDOJO_ask_item_type:
        *output_code = ask_item_type(input_str, output_string);
        break;
      case PLMDOJO_submit_to_workflow:
        *output_code = submit_to_workflow(input_str, output_string);
        break;
      default:
        *decision = NO_CUSTOMIZATION; // do nothing
        // code may be meant for another library using PDM Server
      }
     
      // allocate memory for output string and copy
      // std::string's contents to it
      // (the cast is needed when compiling as C++):
      *output_str = (char*) malloc(sizeof(char) * (output_string.length()+1));
      strcpy( *output_str, output_string.c_str() );
     
      return 0;
     
    }

PDM Server: NX Side

Once the Teamcenter side is configured all you have to do on the NX side is call a function called UF_UGMGR_invoke_pdm_server. Note the prefix: USER in Teamcenter, but UF_UGMGR in NX. Its prototype looks like this:

int UF_UGMGR_invoke_pdm_server(
    int input_code,       // input
    char* input_string,   // input
    int* output_code,     // output
    char** output_string  // output to be freed
);

The parameters passed to UF_UGMGR_invoke_pdm_server are exactly the same as those handled by the custom callback you defined, and now you’re all set:

// Example: Submitting a part to a workflow from NX
tag_t part = UF_PART_ask_display_part();
 
char encoded_name[MAX_FSPEC_SIZE+1];
UF_PART_ask_part_name(part, encoded_name);
 
char cli_name[MAX_FSPEC_SIZE+1];
UF_UGMGR_convert_name_to_cli(encoded_name, cli_name);
 
// input string is workflow_template_name:cli_name
std::string input_string = std::string("ReleasePart:") + cli_name;
char* job_name = NULL;
int output_code = 0;
 
// submit the current part to the ReleasePart workflow:
UF_UGMGR_invoke_pdm_server(
    PLMDOJO_submit_to_workflow, 
    const_cast<char*>(input_string.c_str()), // prototype wants char*
    &output_code,
    &job_name);
 
// ...do something with job_name...
 
UF_free(job_name); 
// ...etc...

And there you have it, all the pieces you need to make NX talk to Teamcenter. With this you can do or retrieve most anything in Teamcenter. What will you use it for?


How to Control Part Numbers with User Exits


Previously we looked at how to control part numbers with Naming Rules. Naming rules work great for many cases, but they do have limitations. The counters they use to generate a part number when a user clicks the Assign button are limited, and naming rules cannot make any corrections, other than case conversion, to the values entered. Fortunately, they are not the only tool in the Teamcenter administrator’s toolbox. To get to the next level of control over part numbers you can develop your own user exit functions to customize how part numbers are generated and validated. So, let’s review how User Exits work, the types of things you can do with them, and how to customize them.

What is a User Exit?

A User Exit is an ITK function that Teamcenter calls during the course of its operation and which customers can override by writing their own ITK code to replace the default behavior. The user exit functions are prefixed USER_ and are defined in the user_exits.h header file. Some of the ones that are useful as an alternative (or in addition) to naming rules are:

  • USER_validate_item_rev_id()
  • USER_new_item_id()
  • USER_new_revision_id()
  • USER_new_dataset_name()
  • USER_new_folder_name()
  • USER_new_form_name()

USER_validate_item_rev_id() is used to validate the item_id and item_revision_id fields of a newly created item, and the USER_new_* user exits are used to supply a new ID or name value when a user clicks on an Assign button. Note: user exits aren’t just for generating and validating ID and name fields; there are many others besides these that are related to completely different areas of Teamcenter, such as CAE, BOM comparisons, and validation results.

Why Use User Exits?

If you understand how naming rules work already you may be asking why you’d use user exits instead. After all, naming rules already perform the task of validating item_id and item_revision_id, AND you can define counters to generate new IDs upon demand, so what’s so great about user exits?

The answer is flexibility.

Consider some examples:

  1. Your naming authority for issuing new part numbers is something other than Teamcenter. This system has a web service for issuing part numbers. Typically users use a web application to obtain the next available number and then manually key that into Teamcenter. Using a user exit for USER_new_item_id you could automate this process. Your user exit could call the web service, obtain the next part number, and provide that to the user automatically. A naming rule could not do that.
  2. Dataset names typically default to item_id/revision_id, so if the item ID is 100-20000 and the revision ID is 01, then the dataset is named “100-20000/01”. In Teamcenter Engineering the width of the item ID field was 32 characters, and the width of the dataset name field was… 32 characters. Do you see the problem? What if the item ID was 30 characters long? Then the default dataset name would be 33 characters wide (30 for the item ID + 3 for “/01” = 33). I have seen this happen while migrating legacy data into Teamcenter; it isn’t pretty.

    A solution is to create your own user exit for USER_new_dataset_name that checks the length of the item ID before generating the dataset name, abbreviating the name if necessary. Note: in Teamcenter 8+ the size of the fields is much larger, 128 characters, so the problem isn’t as likely — but I wouldn’t bet on it not happening.

  3. Besides simply validating the proposed item IDs, USER_validate_item_rev_id also allows you to alter the item ID and either propose the updated ID to the user or force the user to accept it. For example, you could force a mandatory prefix or suffix on all item IDs (do nothing if the user already added it, but add it for them if they didn’t) or you could replace illegal characters, such as replacing all spaces with underscores (there’s a sketch of this after the how-to steps below). All naming rules can do is convert the ID to either uppercase or lowercase.

How to Customize a User Exit

There are two ways to do this:

Option A: Rebuild libuser_exits.dll

You have the option of editing some sample source code files provided with Teamcenter and then recompiling the default user exit shared library, libuser_exits.dll/so/sl. user_part_no.c contains the code for the user exits we’re discussing today. That method is documented in the User Exits module documentation (from the HTML documentation for Teamcenter, Customizing Teamcenter → Integration Toolkit (ITK) Function Reference → Modules → User Exits). If that method appeals to you, go read that. Personally, I don’t like that method at all. For one thing, the source files you edit have a lot in them; more than I’ve ever needed to customize, and I’m afraid of unintentionally altering the behavior of a user exit that’s entirely unrelated to my intended task. Mucking around with a DLL supplied by the vendor just seems like a bad idea to me.

So, I prefer…

Option B: Registering a Custom Replacement Function

In a nutshell, what you do is write your own function to perform the task of the user exit, and then call another ITK function to register your function as a replacement for the user exit. The actual process is very similar to how you go about setting up PDM Server.

  1. Pick a name for a customization library. In the examples below I’m using libplmdojo
  2. In your preferences, register your customization library by adding its name, without the file extension, to the preference TC_customization_libraries. When Teamcenter starts it will look for the libraries mentioned by this preference and load them.
  3. Create the actual user exit library. This is a shared library (i.e. a DLL on Windows) that you write yourself. At a minimum it will contain the functions described below.
  4. Within your custom library implement the actual function that will replace the user exit you’re overriding. Example:
    extern "C" // for C++ compilation
    DLLAPI int PLMDOJO_validate_item_rev_id(
        int *decision, va_list args)
    {
       // whatever you want to do...
    }
  5. A function named your_library_name_register_callbacks(). This function needs to call CUSTOM_register_exit() to register the function defined above as the callback function for USER_validate_item_rev_id.
    // Example: Registering a user exit callback function
     
    extern "C" // for C++ compilation
    DLLAPI int libplmdojo_register_callbacks()
    {
        CUSTOM_register_exit(
            "libplmdojo",
            // my library name
     
            "USER_validate_item_rev_id",
            // the user exit we're customizing
     
            (CUSTOM_EXIT_ftn_t)PLMDOJO_validate_item_rev_id
            // the callback function
                            );
     
        return 0;
    }

    When the library is loaded at launch (because it was listed in the TC_customization_libraries preference), the _register_callbacks function is executed. This will make sure that your customizations are registered to be called in place of the default user exit.

  6. Within your own function, extract the parameters passed to the callback from the va_list argument using the standard C va_arg() macro. 

    The specific parameter list you’ll expand will match that of the user exit you’re replacing and the way you use them will be the same. So if one of the parameters of the original user exit is an in/out parameter that you can modify, then the corresponding argument of the va_list will also be an in/out parameter that you can modify.

  7. Set the decision input parameter to ALL_CUSTOMIZATIONS, ONLY_CURRENT_CUSTOMIZATION, or NO_CUSTOMIZATION, as appropriate. For example, you may just set it to ALL_CUSTOMIZATIONS so that your code is always invoked, or you may do something like check the item type and then set ALL_CUSTOMIZATIONS for engineering types but NO_CUSTOMIZATION for manufacturing types so that they use the default implementation instead.
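Putting steps 6 and 7 together, here’s a sketch of what the body of the callback might look like, including the space-replacing correction mentioned earlier. A word of caution: the va_list expansion shown here is an assumption for illustration only; check the documented signature of USER_validate_item_rev_id before using it.

// Sketch of a USER_validate_item_rev_id callback body.
// CAUTION: the parameters extracted from the va_list below are
// assumptions -- check user_exits.h and the ITK reference for
// the exact expansion before using this.
extern "C" DLLAPI int PLMDOJO_validate_item_rev_id(
    int *decision, va_list args)
{
    char **item_id = va_arg(args, char**); // assumed in/out parameter
    char **rev_id  = va_arg(args, char**); // assumed in/out parameter

    *decision = ALL_CUSTOMIZATIONS;

    // Example correction: replace spaces with underscores
    if (item_id && *item_id)
    {
        for (char *c = *item_id; *c; ++c)
        {
            if (*c == ' ') *c = '_';
        }
    }

    return ITK_ok;
}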

Addenda

Using Naming Rules and User Exits

Some quick points:

    1. You can still attach a naming rule to an item ID if you are providing your own callback function for USER_validate_item_rev_id(); the naming rule will be evaluated after the user exit completes (see the diagram below). This might be useful if you’re primarily using the user exit to modify the input before passing control back to Teamcenter and the naming rule validation.
    2. [Diagram: sequence of user input, user exits, and naming rule checks during item creation]

    3. There is a NR_ module for using naming rules in ITK, so you can invoke a naming rule from a user exit, which could be used to validate the input against the naming rules and then to modify it if it fails.
    4. User exits don’t just work in the Teamcenter user interface. For example, they are invoked by the Assign button in the NX new part dialog as well. I believe (but haven’t verified) that they are called by the ITK function ITEM_create_item when the item_id parameter is NULL. (The documentation states that if the item_id is null that, “the system will automatically generate an Item ID.” I presume that involves calling the user exit and not just a naming rule)

Studying user_part_no.c

Although I don’t like the approach of editing the source files provided with Teamcenter, it is very instructive to study them, especially user_part_no.c. They can be found under %TC_ROOT%/sample. You’ll get a much clearer understanding of how exactly your custom callback functions are invoked by the user exits.

Where Naming Rules Have an Advantage

Before Teamcenter “Unified”, naming rules could only validate a few fields, primarily item and revision IDs. Now, however, naming rules can be attached to pretty much any string attribute, which greatly increases their uses. User exits, by contrast, only exist for a few specific fields.


How to Automate NX Migrations with Autotranslate


If you’re going to migrate very much NX CAD data into Teamcenter, even if you’re going to do it manually[1] by using NX’s File → Import Assembly into Teamcenter… there’s one customization you should very seriously consider: a custom Autotranslate function.

A custom Autotranslate will give you the ability to automate how native file names are translated into Teamcenter IDs, alleviating the need to have users manually enter them in. It can be used whether you’re importing data using NX’s dialogs or its command line utility, or if you’re developing your own program to handle the migration for you.

Autotranslate

In the import dialog, one of the options you may select for setting the item IDs is Autotranslate. The autotranslate option looks at the file names being imported and converts them into the Item ID and revision ID to be used in Teamcenter. NX comes with a default implementation of autotranslate; if memory serves it takes anything up to the last _ character to be the item ID and anything after it to be the revision ID. If that works for you then you can stop reading now and go find something else to do. The rest of you, stick around.

BYOA (Bring Your Own Autotranslator)

The interesting thing is that you have the ability to supply your own autotranslate function custom written for your own data. This opens all sorts of possibilities. The first is being able to handle different revision identification schemes besides the default. But you’re not limited to just that. You can transform improperly named files into correctly identified items, perhaps by removing extra prefixes or suffixes, converting underscores to dashes (or vice versa), or removing illegal characters. You could also consult a renaming table that maps native file names to item IDs (x:\foo\bar\mounting_plate.prt = 1234567/A). What you do with it is up to you.

Writing your own Autotranslator

Just to make sure we’re all on the same page: although we mainly discuss Teamcenter here, the code shown is NX Open C, not Teamcenter ITK. Autotranslate is a function of NX, not Teamcenter.

The basic template for your autotranslate function is something like this:
extern "C" DllExport // export for unit tests
int plmdojo_autotranslate( const char input[MAX_FSPEC_SIZE + 1],
                           char output[MAX_FSPEC_SIZE + 1])
{
	if ( input[0] == '@')
	{
	   // export
	   return tc_to_native(input, output);
	}
	else
	{
	   // import
	   return native_to_tc(input, output);
	}
}

Input and Output

Although we’re primarily focusing on using autotranslate for importing data into Teamcenter, it can also be used to export data out of Teamcenter. You have to check your input to understand which direction the autotranslate is being used in. The native OS side will be a file name, with or without the full path. The Teamcenter side will be in the CLI (command line interface) form used by several command line utilities: @DB/item_id/revision_id. So for imports the input parameter will be a file name and the output will be a CLI string. For exports it will be the other way around: input will be in CLI form and output will be a file path, hence the if(input[0] == '@') test to determine which applies.
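As an illustration, a bare-bones native_to_tc that implements the default last-underscore convention might look something like the sketch below (it needs <string> and <cstring>; real data will need the special-case handling discussed later):

// Sketch: translate "path\101-000-001_01.prt" into
// "@DB/101-000-001/01" using the last-underscore convention.
static int native_to_tc(const char input[MAX_FSPEC_SIZE + 1],
                        char output[MAX_FSPEC_SIZE + 1])
{
    std::string name(input);

    // strip any directory path and the .prt extension
    std::string::size_type slash = name.find_last_of("\\/");
    if (slash != std::string::npos) name.erase(0, slash + 1);
    std::string::size_type dot = name.rfind(".prt");
    if (dot != std::string::npos) name.erase(dot);

    // split on the last underscore: item ID / revision ID
    std::string::size_type under = name.rfind('_');
    if (under == std::string::npos) return 1; // can't translate

    std::string cli = "@DB/" + name.substr(0, under)
                    + "/" + name.substr(under + 1);
    strncpy(output, cli.c_str(), MAX_FSPEC_SIZE);
    output[MAX_FSPEC_SIZE] = '\0';
    return 0;
}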

Registering a custom Autotranslate Function

You instruct NX to use your autotranslate function instead of the default with the UF_UGMGR_set_clone_auto_trans() function. Loading a DLL at NX start-up that contains this call would do the trick:

extern "C" DllExport 
void ufsta( char *param, int *returnCode, int rlen )
{
    UF_initialize();
    UF_UGMGR_set_clone_auto_trans(plmdojo_autotranslate);
    UF_terminate();
}

Search the NX documentation for Automatic Loading At NX Start-up for information on loading shared libraries at start up.

Testing and Refining

You’ll note that I declared my custom autotranslate function as extern "C" DllExport. This is so I can call it directly from another program so I am able to unit test it outside of NX. I strongly encourage unit testing your function. Every time you find something you need to correct, add a new unit test for it and re-run the entire suite of tests every time you update your autotranslator.

Additionally, I suggest you get a file listing of every file you might possibly import and write a program to call your autotranslator on every one of them and then write the results to a CSV file that you can open in Excel. You’ll learn a lot about your data and what special cases you need to check for in autotranslate. You may not be able to write a function that handles 100% of the file names perfectly (there’s always that one screwball file), but if you can write one in a reasonable amount of time that handles 80%–95% of your data, then it’s definitely worthwhile.

Personally, I write these sorts of programs in Python, but that’s a topic for another day…


[1] I hesitate calling any task done with a computer a manual task. Sitting at a desk pushing buttons is not manual labor.


Using Python to Unit Test NX Open (or ITK) C Code


Previously when I discussed creating a custom autotranslate function to assist in NX data migrations, I recommended that you should unit test the function with an external program and I mentioned that I wrote my test harnesses in Python. Today I thought I’d show some examples of what that looks like.

Some notes:

  • The goal is just to show what sorts of things can be done and how easily they are to do. I use Python because it helps me get more done with less effort.
  • I’m not going to spend much time explaining the code. If you’re interested in this sort of thing, the Python documentation is quite good.
  • If this post seems to generate some interest I’ll be likely to do more on the topic. If not, I’ll be somewhat less likely to revisit the topic soon.
  • These examples are written for Python 3.2.
  • For a development environment I use Eclipse with the Pydev plugin.

If you’re sitting there saying, Oh, Ruby is much better than Python for this sort of thing, or, I can do the same thing in only three lines of Perl!, fine. Write your own article. I have no interest in getting into a whose-language-is-better holy war. Python works for me. If this article spurs you to find a way to do the same thing in your favorite language, then I think it will have done some good.

Python Unit Testing

First up, here’s what a python unit testing harness looks like:

import unittest
import os
import ctypes
 
MAX_FSPEC_SIZE        = 256
 
class TestAutotranslate(unittest.TestCase):
 
    dll_path = r'C:\your\dlls\plmdojo_autotranslate.dll'  # path to the DLL file itself
 
    # Include NX runtime libraries in %PATH% so DLL can find them
    os.environ['PATH'] = os.pathsep.join([os.environ['UGII_ROOT_DIR'], 
                                          os.environ['PATH']])
 
    dll = ctypes.CDLL(dll_path)
 
    # create c-compatible string arrays to hold the input and output
    input = ctypes.create_string_buffer( MAX_FSPEC_SIZE + 1 )
    output = ctypes.create_string_buffer( MAX_FSPEC_SIZE + 1 )   
 
    def check_trans(self, input, expected_output):
        # assign the input to the input buffer,
        # encode() is unicode stuff, (char to bytes)
        self.input.value = input.encode() 
 
        # call the custom autotranslate function
        self.dll.plmdojo_autotranslate(self.input, self.output)
 
        # Verify the results
        # decode() is unicode stuff, (bytes to char)
        self.assertEqual(self.output.value.decode(), expected_output)
 
    def test_basic(self):
        self.check_trans('101-000-001_01.prt', '@DB/101-000-001/01')
 
    def test_prefixed(self):
        "Test trimming off an unnecessary prefix"
        self.check_trans('some_prefix.101-000-001_01.prt', 
                         '@DB/101-000-001/01')
 
    def test_underscores(self):
        "Test converting underscores to dashes"
        self.check_trans('101_000_001_01.prt', '@DB/101-000-001/01')

The main things to mention are that ctypes is the module that lets you load shared libraries and call the C-compatible functions which they export, and that any method whose name begins with “test” in a class which subclasses unittest.TestCase is a single test case. IDEs like Pydev will let you execute the test cases defined in a source file directly from the interface.

Analyzing Existing Data

I also mentioned in the previous post that it was a useful exercise to pass all of your file names into a program that called your autotranslate function. That way you could see what it handled and what it didn’t and refine both autotranslate and the unit test harness.

The following code demonstrates how that can be done. The assumption is that the input files list one file name per line. The output is then written to a CSV file.

import os
import csv
import ctypes
import string
 
MAX_FSPEC_SIZE = 256
 
def do_it():
 
    dll_path = r'C:\your\dll\dir\plmdojo_autotranslate.dll'
    os.environ['PATH'] = os.pathsep.join([os.environ['UGII_ROOT_DIR'], 
                                          os.environ['PATH']])
 
    dll = ctypes.CDLL(dll_path)
    autotranslate = dll.plmdojo_autotranslate
 
    with open('data/autotranslate_report.csv', 'w') as outfile:
        csv_writer = csv.writer(outfile, lineterminator="\n")
 
        # Write out a header row
        csv_writer.writerow( ["Filename", "CLI", "item ID", "rev ID",
                          "Item ID Pattern", "Rev ID Pattern"] )
 
        # scan returns a row of results from translating each file name 
        # one at a time. As results come back, csv_writer writes each 
        # to the csv file
        csv_writer.writerows( scan('data/parts.txt', autotranslate) )
        csv_writer.writerows( scan('data/library_parts.txt', 
                                    autotranslate))
    print("All Done")
 
 
 
def scan(filename, autotranslate_function):   
    input = ctypes.create_string_buffer( MAX_FSPEC_SIZE + 1 )
    output = ctypes.create_string_buffer( MAX_FSPEC_SIZE + 1 )   
 
    # translate 0-9 --> 'n', whitespace --> '~'
    itemid_trans_table = str.maketrans( string.digits
                                        + string.whitespace,
                                        'n'*10
                                        + '~'*len(string.whitespace))
 
    # translate 0-9 --> 'n', whitespace --> '~', A-Z --> 'a'
    revid_trans_table = str.maketrans( string.digits
                                       + string.ascii_uppercase
                                       + string.whitespace,
                                       'n'*10
                                       + 'a'*26
                                       + '~'*len(string.whitespace))
 
    with open(filename) as listing:
        # read each line of the input file
        for filename in listing: 
            # trim leading and trailing whitespace
            filename = filename.strip() 
 
            input.value = filename.encode()
            # perform the translation
            autotranslate_function(input, output)
 
            cli = output.value.decode() # unicode bytes to char
 
            _, item_id, rev_id = cli.split('/')
 
            # convert the actual IDs to generic patterns so we can
            # more easily group the common cases together and see
            # what the uncommon cases are:
            item_id_pattern = item_id.translate(itemid_trans_table)
            rev_id_pattern = rev_id.translate(revid_trans_table)
 
            # oooo... a generator
            yield [filename, cli, item_id, rev_id, 
                   item_id_pattern, rev_id_pattern]
 
if __name__ == '__main__':
    do_it()

Closing Thoughts

For me, what it comes down to is that using a higher level language like Python helps me accomplish things, like thorough unit testing or extended data analysis, that I probably wouldn’t do at all if I restricted myself to programming in only C or C++. The reasons for that include the fact that Python comes with a lot of libraries that make a lot of tedious tasks much easier, and that I simply enjoy programming in Python, and when I’m enjoying my work I tend to work harder at it.

If you don’t already have a favorite high-level language or if you’re just curious, I’d highly recommend taking a look at what Python can help you accomplish. If you do already have a favorite but you’re not using it for this kind of thing, go do some research, find out if you can, and then come back and share your results.


Ok, Cancel the Ok and Cancel Buttons


I have a simple request to make of developers who design dialog boxes for the applications I use: stop forcing me to translate every decision you ask me to make into a choice between Ok and Cancel.

Since this is nominally a Teamcenter blog I’ll pick on their BMIDE first. This is the dialog you get if you attempt to reload a project that has unsaved changes:

Now, I’m reasonably smart; my lips don’t usually move when I read. But I really have to slow down and read messages like this a couple of times to make sure I don’t click the wrong button. What is it with having to translate every decision into Ok and Cancel? Is there a tax on using more than six letters on a button? How about having us choose from, oh I dunno, buttons that are actually labeled with the actual options? Is it really so hard to do something like this:

Doesn’t this make it clearer what will happen if I click a particular button? And notice that I added a real don’t do anything! option — which is what I would normally expect cancel to do.

So please, before you design a dialog box with a bunch of complicated instructions about when to click Ok and when to click Cancel, why don’t you try to replace the default labels with a short description of what the buttons themselves actually do?

Microsoft’s Turn

Here’s another example that could use some rework. This is the dialog you get if you ask Excel to save a CSV (comma separated value) file:

I mean, sheesh, c’mon. I’ve been working with CSV files for years and I still have to read that carefully each time.

One more thing…

Something else that annoys me: Web developers who force me to enter my debit card number without any spaces. Because they couldn’t spend five minutes figuring out how to deal with spaces it’s now ten times harder to verify the digits are correct before clicking Buy. Seriously, what crappy language are you using that makes it so dang hard to strip spaces out of a string of input?



An SQL Query for Inventorying Teamcenter Objects


As part of an upgrade/data-model conversion we have going on where I work, the application engineers from Siemens asked us if we had a way of inventorying Teamcenter so we could know how many items of each item type we have. So I came up with the following SQL query:

SQL Query for Inventorying Teamcenter

SELECT
  w.pobject_type, COUNT(*)
 
FROM
  pitem i, pworkspaceobject w
 
WHERE
  i.puid = w.puid
 
GROUP BY
  w.pobject_type
 
ORDER BY
  w.pobject_type
;

I ran this against a Teamcenter Engineering 2007 system running on Oracle. I can’t promise that it’ll work for Teamcenter 8, but I expect it will — if you try it, please let me know your results in the comments below. If you’re not on Oracle you may need to adjust the SQL a bit.

It should be easy to modify the query to inventory other types of objects, for example dataset types, by selecting from a different table besides pitem.
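For instance, assuming datasets live in the pdataset table the way items live in pitem, a dataset inventory would look like this (an untested sketch, following the same pattern):

SELECT
  w.pobject_type, COUNT(*)
 
FROM
  pdataset d, pworkspaceobject w
 
WHERE
  d.puid = w.puid
 
GROUP BY
  w.pobject_type
 
ORDER BY
  w.pobject_type
;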

By the way, if you are using Oracle I recommend SQL Developer, which is a free download from Oracle.


Hire an ITK Programmer, Save Money


A little while ago I was talking to one of my colleagues who also supports Teamcenter. Mott (not his real name) told me that he is not allowed to use any ITK customizations at all. The program manager has said that everything should be done purely by codeless configuration. Coding costs more money and could easily blow the project’s budget. The problem was that this limitation forced Mott to put together some God-awful workarounds to comply with the business requirements.

I asked him, so you mean to tell me that they’re forcing you to deliver a solution that is way too complicated and does a half-assed job of implementing their business requirements because they’re afraid to invest in a little bit of code that would get them a system that does exactly what they want?!?

For some reason, Mott was offended by my description of his project.

ITK means Elegance and Correctness

I understand where the project managers were coming from. Programming can cost more up front. But I think that taking a hardline position forbidding any customizations is shortsighted. I do think that whenever possible we should avoid writing new code to configure the system. But well-done custom code can mean the difference between an elegant system that does exactly what is required and one that is complicated and doesn’t do what is required. The money saved up front is lost later to a more inefficient and complicated system.

ITK Is Usually not Very Complicated Code

In my experience most ITK customizations are actually very simple pieces of code — a pre-condition that does a quick validation before allowing the user to proceed, a post-action that does a small bit of cleanup here or there, or a runtime property that displays useful information to the user in an easy to understand format. The main exception I’ve seen where ITK coding can be complicated is when trying to interface with external systems, and that has more to do with the requirements of the external system than with Teamcenter.

Of Course, You Need To Know What You’re Doing

A valid reason to avoid using custom code is not having anyone available who really understands how to efficiently customize Teamcenter with ITK. It’s not something you pick up in a five day training class.

Yes, I am Biased

I am a programmer myself. I have a financial incentive to ensure that my customers will allow ITK customizations. But that doesn’t mean I’m wrong either.

It seems that everyone wants to redesign the interface. I will go to great lengths to talk people out of customizing the rich client interface with Java code. That kind of customization is much more difficult to do successfully, harder to maintain, and of a much lower value to the customer — in my opinion. But I am also much more comfortable writing ITK code than Eclipse framework code. So, yes, I am biased. But I still think I’m right.

Tell Me How and Why I’m Wrong

What do you all think?
Do you work with Teamcenter instances that do not use any customizations at all? How well do they work for you?
Do you forbid your team from using ITK?


Ninja Updates: Make Modifications Without Leaving a Trace


Sometimes you need to make changes to Teamcenter data without leaving any trace that you were there. A simple ITK program could do the update, but it will also change the Last Modified Date and Last Modifying User properties in the process. That is, unless you do the update in ninja mode and leave no trace behind.

When You Need to Be A Ninja

Sometimes there are valid reasons to make an update without updating the last modified date and user. For example, I recently wrote a short program to populate some new attributes with data from another system. Since the core data hasn’t actually changed — it’s only been synchronized with the external system — I didn’t want to change the last modifying user and date fields. Knowing who actually worked on something last, and when, is often useful information to have. It wouldn’t be very useful if all the data were suddenly updated to say that I had been the last person to work on it.

Not the Ninja Way

Now I know that those last-modified attributes are just fields that I can get and set with POM or AOM functions, so my original plan was to do something like:

# Pseudo-code -- not real function names!
 
# store current values
original_user = get_last_modifying_user() 
original_date = get_last_modified_date() 
 
# Updates last modifying user and date:
fill_in_new_attributes()
 
# Restore original values:
set_last_modifying_user(original_user)
set_last_modified_date(original_date)

But then as I was perusing the documentation for the POM_ library I found a better way…

The Way of the Ninja

Here’s how to turn on Ninja mode:

POM_set_env_info(POM_bypass_attr_update, TRUE, 0, 0, 0, "")

POM_set_env_info can do a lot of things — so read the docs! — but the thing I was interested in was its ability to temporarily disable updating the last modifying user and last modified date when instances are saved. And by golly, it worked.
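Putting it together, here’s a rough sketch of how ninja mode might wrap a single update. The TRUE/FALSE toggling and the exact argument list are assumptions based on the call shape shown above, and object_tag stands in for whatever instance you’re updating; check the POM documentation for your release before relying on this:

// Hedged sketch: bypass the last-modified updates around one save.
// Assumes TRUE/FALSE toggles the bypass on and off -- check the POM docs.
POM_set_env_info(POM_bypass_attr_update, TRUE, 0, 0, 0, "");  // ninja mode on
 
AOM_lock(object_tag);
AOM_set_value_string(object_tag, "my_property", "synchronized value");
AOM_save(object_tag);    // saved without touching last modified date/user
AOM_unlock(object_tag);
 
POM_set_env_info(POM_bypass_attr_update, FALSE, 0, 0, 0, ""); // back to normal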

Warning

I don’t think I need to spend any time describing how ninja mode could be horribly misused out of either ignorance or malice. And it can be particularly misused by someone with a DBA account who can turn on bypass mode. So be careful who gets those DBA accounts, ‘kay?

The post Ninja Updates: Make Modifications Without Leaving a Trace appeared first on The PLM Dojo.

How To Compare and Sort Revision IDs

One piece of code I find myself writing over and over is a function that compares two revision IDs to decide which one is the higher rev. The revision sequence we use is too complicated for a standard lexicographical comparison to work. I’ve found several poor ways of implementing it. I think I’ve finally found a decent solution. Perhaps it’s something that will help some of you.

When Do I need to Compare and Sort?

Here are a couple of examples of where I’ve had to do a comparison of revision IDs.

1. Checking that revisions are created in order

Our rules for CAD models allow revisions to be skipped — typically when multiple RNs are being incorporated at once. What we don’t allow is for users to go back and create a lower rev than the latest. So, A → C → E is fine, skipping revs B and D, but A → C → B is not okay. Once C has been created the users can’t go back and create rev B.

We prevent this with a custom pre-condition on item rev creation that takes the existing rev IDs, sorts them in order, and then verifies that the proposed rev ID for the newly created revision is greater than the last revision in the sorted list.

It’s true that there is an ITEM_ask_latest_rev() function, but it looks at the creation date to determine the latest rev, and I don’t entirely trust creation dates. Revs could have been created out of order before we implemented the pre-condition that checks for that, or during a poorly handled data migration.

2. During data migrations

The other common reason this comes up is data migrations. When migrating data into TC I want to check whether each component I’m importing is newer than what’s currently in Teamcenter. If it is, I migrate it; if it isn’t, I skip it and have the migrated assembly use the latest rev that’s already in TC.

What’s So Hard About that?

It’s difficult because of the revision sequence we’re using. If your sequence is simple then you may not have any problems.

The revision sequence we use is:

  1. Numeric revs (01–99) for preliminary work
  2. Revision “dash” — a literal “-” character, for the initial release.
  3. Single character Alpha revisions, A–Y, for approved changes (note that “Z” is an illegal character for revisions)
  4. Two digit Alpha revisions, AA–YY, for when we run out of single digit alpha revisions

If you did a simple text based comparison of the revision IDs it would almost work, but not quite. Rev “-” compares as less than both the numeric and the alpha revisions, and the one and two digit alpha revs don’t compare correctly:

revs = ["-", "99", "01", "AA", "B"]
sorted(revs) == ["01", "99", "-", "B", "AA"]
# Correct sorting

But what we get is,

sorted(revs) == ["-", "01", "99", "AA", "B"]
# Incorrect sorting
# The ASCII value for '-' is 45 while the ASCII value of '0' is 48
# so '-' comes before '01'.
# "AA" comes before "B", alphabetically.

On top of that, legacy alpha revisions were zero-padded to always have two characters, “0A” instead of “A”. That causes more problems. And then on top of that rev “-” used to be entered into Teamcenter (and nowhere else) as “00” (please don’t ask why).

So the correct sorting would be,

revs = ["01", "99", "00", "-", "0A", "A", "0B", "B", "Y", "AA", "YY"]
sorted(revs) == ["01", "99", "00", "-", "0A", "A", "0B", "B", "Y", "AA", "YY"]
# Correct sorting

But instead we would get,

sorted(revs) == ['-', '00', '01', '0A', '0B', '99', 'A', 'AA', 'B', 'Y', 'YY']
# Naive and incorrect sorting.

A Revision Comparison and Sorting Recipe

The process I follow can be summarized as: normalize, decorate, sort.

1. Normalize

The first step is to get a normalized version of the rev IDs:

string normalize(const string &rev_id);
// normalize("0A") == "A"
// normalize("00") == "-"
// normalize("A") == "A" // unchanged by the normalization

This function converts rev IDs to a common format for the comparisons so we don’t have to insert all sorts of special handling code into our actual comparisons. I’ll leave it to you to figure out the implementation (it’s not terribly interesting — you’ll probably end up using isalpha() and isdigit() a lot).
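That said, if you want a starting point, here is a minimal sketch that handles only the quirks described above (the legacy zero-padded alphas and the “00” dash) and passes everything else through unchanged:

#include <cctype>  // isalpha()
#include <string>
using std::string;
 
// Minimal normalize() sketch -- assumes rev IDs follow the rules above.
string normalize(const string &rev_id)
{
    if( rev_id == "00" )            // legacy encoding of rev dash
        return "-";
    if( rev_id.size() == 2 && rev_id[0] == '0'
        && isalpha(static_cast<unsigned char>(rev_id[1])) )
        return rev_id.substr(1);    // strip zero padding: "0A" -> "A"
    return rev_id;                  // already normalized
}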

2. Decorate each rev ID with its rev type

First, define an enum that lists the different types of rev IDs in order:

typedef enum
{
    NUMERIC,
    DASH,
    ALPHA, // one digit alpha, A, B, C, etc.
    DOUBLEALPHA // two digit alpha, AA, AB, etc.
} rev_type_t;

Next, define a function that looks at a rev ID and returns its type:

rev_type_t get_rev_type(const string &normalized_rev);
// get_rev_type("01") == NUMERIC
// get_rev_type("-") == DASH
// get_rev_type("A") == ALPHA
// get_rev_type("AA") == DOUBLEALPHA

Again, I’ll leave the implementation as an exercise.
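For reference, here is one possible sketch. It assumes the ID has already been normalized, so “00” and “0A” never reach it:

#include <cctype>  // isdigit()
 
// Minimal get_rev_type() sketch -- expects a normalized rev ID.
rev_type_t get_rev_type(const string &normalized_rev)
{
    if( normalized_rev == "-" )
        return DASH;
    if( isdigit(static_cast<unsigned char>(normalized_rev[0])) )
        return NUMERIC;
    return ( normalized_rev.size() == 1 ) ? ALPHA : DOUBLEALPHA;
}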

Finally, combine (decorate) each rev ID with its type:

#include <utility> // std::pair<>, std::make_pair()
 
std::pair<rev_type_t, string> decorate_revid(const string &rev_id)
{
    const string normalized_rev = normalize(rev_id);
    const rev_type_t rev_type = get_rev_type(normalized_rev);
 
    return std::make_pair(rev_type, normalized_rev);
}
// decorate_revid("01") == pair(NUMERIC, "01")
// decorate_revid("-")  == pair(DASH, "-")
// decorate_revid("00") == pair(DASH, "-")  // "00" normalized to "-"
// decorate_revid("A")  == pair(ALPHA, "A")
// decorate_revid("0B") == pair(ALPHA, "B") // "0B" normalized to "B"
// decorate_revid("AA") == pair(DOUBLEALPHA, "AA)

3. Compare and Sort

Now that you’ve decorated the rev IDs you can compare them reliably. Instead of comparing the revision IDs themselves, or even their normalized forms, we compare objects of type pair<rev_type_t, string>. The first element, the rev_type_t, makes the classes of revision IDs compare correctly against each other: all numerics are less than the dash, which is less than all single-character alphas, which are less than all two-character alphas. The normalized revision ID in the second element then ensures that rev IDs sort correctly within each type.

// given two rev IDs return the greater.
// If they are equivalent, return the first argument
// (preserve weak ordering)
//     max_revid("0A", "A") == "0A"
//     max_revid("A", "0A") == "A"
string max_revid(const string &first_rev, const string &second_rev)
{
    if( decorate_revid(first_rev) >= decorate_revid(second_rev) )
    {
        return first_rev;
    }
    return second_rev;
}

And once you can compare them, you can sort them:

#include <algorithm> // std::sort()
#include <vector>
 
// A comparison function for use by std::sort(). 
// Returns true if the first rev should be sorted strictly before the second, 
//   false otherwise (and false for equivalent revs, as std::sort requires)
bool compare_revids(const string &first_rev, const string &second_rev)
{
    return( decorate_revid(first_rev) < decorate_revid(second_rev) );
}
 
string unsorted_revids[] = {"0A", "99", "AA", "A", "-", "B", "00", "01"};
vector<string> revids(unsorted_revids, unsorted_revids + 8);
 
std::sort(revids.begin(), revids.end(), compare_revids);
// revids now sorted: ["01", "99", "-", "00", "0A", "A", "B", "AA"]
// (equivalent revs such as "-"/"00" and "0A"/"A" may appear in either order)

Usage

Now that I have max_revid() and compare_revids() I can implement my revision-ordering pre-condition easily:

  1. Get all current revision IDs for the item
  2. Sort them in order using std::sort() with compare_revids() as the compare function
  3. Compare the last revision in the sorted list of rev IDs to the rev ID for the new revision. If the new rev ID isn’t greater than the current rev, return an error code (see the sketch below).
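Here is a rough sketch of that check, using compare_revids() directly since it already gives a strict comparison. The function name, existing_revs, and proposed_rev are hypothetical stand-ins; error reporting is left to the caller:

#include <algorithm> // std::sort()
 
// Sketch of the pre-condition: is proposed_rev strictly greater than
// every rev the item already has? (existing_revs is passed by value
// so the caller's copy is left unsorted)
bool rev_id_in_order(vector<string> existing_revs, const string &proposed_rev)
{
    if( existing_revs.empty() )
        return true; // first revision; nothing to compare against
 
    std::sort(existing_revs.begin(), existing_revs.end(), compare_revids);
    const string &latest = existing_revs.back();
 
    // true only if latest < proposed_rev in decorated order
    return compare_revids(latest, proposed_rev);
}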

The post How To Compare and Sort Revision IDs appeared first on The PLM Dojo.

Killing Memory Errors with the STL

Pull up a chair and let me tell you a story from my early days as a professional programmer. It’s about how I screwed up, and what I’ve done since then to make sure that mistake is never repeated.

I’m going to ramble for a bit but I promise that I’ll get to the point eventually.

One of my first big tasks as a programmer was to update, in preparation for an upgrade, some iMan and Unigraphics code I inherited. For you younger kids out there, iMan was the predecessor of Teamcenter Engineering and Unigraphics later bought IDEAS and became NX.

The code was a mess. Of course, every programmer thinks that code written by someone else is a mess. But this really was. There were single functions that would have taken two dozen sheets of paper to print out — double-sided. My favorite was a family of functions that, instead of returning values or modifying a parameter via a pointer, updated fields in a global array. One function would update element 0, another would update element 1, and so on; other functions would then know which element to read. But that’s beside the point.

This code was prone to unrepeatable memory crashes. Memory errors are like that. One function would allocate memory, typically for strings or arrays, and then pass the pointers back to its callers. The callers would be responsible for freeing the memory — unless they passed the allocated pointers on to their own callers, who would then inherit the responsibility, and so on.

If you’re familiar with this type of code you know that it is error prone. Freed memory is read, memory is freed twice, and so on. These types of bugs can be hard to track down. Sometimes the problem is in an execution path that’s rarely taken. Or sometimes the pointer will still point to valid data so long as the OS hasn’t seen fit to reuse that space. Nine times out of ten the code will seem to work fine, and then on the tenth try the OS will actually use that address for something else and the program crashes.

I didn’t care much for this type of code, but it would have taken a massive overhaul to make any substantial change. Being new on the job and new to this code base, I was reluctant to make too many changes. So we tested the code until we found a problem, then I’d hunt it down and try to fix it, and then we’d repeat the process.

Over and over and over and over. Think, whack-a-mole.

Eventually we couldn’t produce any more errors so we decided it was finally ready to release.

So we went ahead with the upgrade. And then… (cue dramatic music) …nothing much happened. The upgrade went about as well as upgrades ever do. There were some snafus here and there, but no show stoppers.

So a week later I left for my first PLM World users group conference.

And then all hell broke loose.

Hitting Bottom

On the first day of the conference we start getting calls from the home office. The drafting department is at a standstill. No drawings can be released. Programs are screaming about delays and imminent deadlines. Oh, and how’s the conference?
Not my best week, to be sure.

The Recovery Process

When I got back to the office I knew I had to do something dramatic. I had tried the whack-a-mole approach for too long. I might have been able to fix the cause of the current complaints, but I had no confidence that another bug wouldn’t be uncovered a week later. I couldn’t continue like that.

I resolved that I would eliminate all of the memory errors from the code once and for all. Doing so required the massive overhaul I had been afraid to undertake earlier. Now I was more afraid of releasing another buggy version of the code.

I set two goals for my code:

  1. The code had to work correctly.
  2. See goal number one.

Admitting I Have a Problem

Fixing this mess was a bit like starting a twelve-step program.
The first step was to admit I had a problem:

I am not smart enough to manage memory myself.

Admitting that required being modest about my programming abilities. Given my recent failure, this wasn’t difficult.

Many times we programmers want to show how incredible our skills are. So we do wild and crazy things in our code that might eke out some extra bit of performance or optimize memory just a bit more or… or… something. More power to you if you can pull that off, but I’m not that good of a programmer.

From admitting I wasn’t very good at dealing with memory management came the obvious solution to my problem: I should not manage any memory.

Okay, fine, but short of hiring an assistant to write my code for me, how do I do that?

Turning to the Standard Template Library

After admitting you have a problem, the second big step in twelve-step programs is to turn to a higher power for assistance. For a C programmer who needs to manage memory correctly, that higher power is C++ and the Standard Template Library (STL).

Now the STL is vast, but there were really only two things I needed to use from it, the std::string class and the std::vector class template.

char* → std::string

The first major area of my code where I was trying to allocate memory was string manipulation.

#include <stdlib.h> // malloc(), free()
#include <string.h> // strlen(), strcpy()
 
void work_with_c_string(const char* input_c_string)
{
	char *c_str = NULL;
	size_t len = strlen(input_c_string);
	c_str = (char *)malloc(len + 1);
	strcpy(c_str, input_c_string);
 
	// do whatever…
 
	free(c_str);
	return;
}

Now, simple examples look simple, but real code gets messy in a hurry, especially when the malloc() and free() are in different functions. The std::string class deals with all of that internally though. It will allocate enough memory to store its contents, and free that memory when the string finally goes out of scope.

void work_with_cpp_string(const string &input_cpp_string)
{
	// memory allocated and copy made automatically
	string cpp_string(input_cpp_string); 
 
	// adding a suffix -- memory resized automatically
	cpp_string += ".foobar";
 
	// whatever…
 
	return; // memory for cpp_string automatically released
}

For the record, notice that I did more in three lines of C++ code than I had done in five lines of C code. That adds up.

Passing std::string to C Functions

But wait, the ITK libraries expect plain old char* strings as input, right? Fortunately for us, std::string has a member function, c_str(), that returns a const char* representation of the string.

	const string value("my new value");
	AOM_set_value_string(object_tag, "my_property", value.c_str() );

Accepting allocated char* from the API

We can’t entirely escape managing memory for C strings. The ITK API has many functions which return a char* which you are then expected to free. My approach to avoiding problems with those strings was bluntly simple.

  1. Initialize a new std::string from the char* string.
  2. Immediately free the char* string.
  3. Do all work with the copy.

Is this overkill sometimes? Probably, but I’ve found that by not trying to be clever about when it was necessary and when it wasn’t, I saved myself a lot of trouble later on.

	char* temp = NULL;
	AOM_ask_value_string(object_tag, "my_property", &temp);
 
	// copy char* to std::string
	const string value( temp );
 
	// free char* string
	MEM_free( temp );
 
	// work with std::string…

dynamic arrays → std::vector<>

The second big area that accounted for most of my memory management needs was building and using dynamic arrays to store lists of items. Again, the STL has an alternative that will deal with the memory management for you, the std::vector<> class template.

Like string, vector will automatically allocate enough memory to store its contents, reallocate as new contents are added, and free its memory when it goes out of scope.

	vector<string> str_vec; // a vector of strings
	str_vec.push_back("first");
	str_vec.push_back("second");
	str_vec.push_back("third");

Passing vectors to C-functions taking arrays

Also, like string, vectors can be passed to C functions that expect regular arrays. The format may look a little odd at first. There isn’t a member function, like c_str(), to call. Instead you use the fact that the contents of a vector are guaranteed to be in contiguous memory, just as if they were in an array. The value of a standard array variable is the address of the memory block holding the array’s first element. So to pass a vector as an array you need to pass the memory address of the vector’s first element: &my_vector[0], where my_vector[0] gives you the first element, and then & gives you its address.

Typically functions that take arrays also need to know how long the array is and take that as a separate parameter. You can pass that value by using the vector’s size() member function.

See the example below.

Accepting arrays from the API

As with strings, when the API returns an array that I am expected to free I immediately copy it into a vector, free the memory, and then work with the vector.

Example using vector

	int *int_array = NULL;
	int array_size = 0;
 
	AOM_ask_value_ints(my_object_tag, "my_property", 
                           &array_size, &int_array);
 
	// initialize vector with contents of array 
        // using the iterator constructor
	vector<int> int_vector(int_array, int_array + array_size);
 
	// Free array
	MEM_free(int_array);
 
	// add a value to the vector
	int_vector.push_back(999);
 
	// pass to c-function API
	AOM_set_value_ints(another_object_tag, "my_property", 
                           (int)int_vector.size(), &int_vector[0]
                          );

A Note on Performance

Earlier I said that I decided that above all other things my code had to work correctly.

The corollary to that was that I did not have the following goals:

  1. To write the fastest possible code.
  2. To write the most memory-efficient code.

The reason I bring this up is that some people might complain that the STL isn’t as efficient as pure C. My response is: yeah. So what?

The typical sorts of things my ITK code does are checking some preconditions when creating an item or performing some task during a workflow. During a typical day they might be executed a dozen times or less by most users. If the perfect implementation takes half a second to run and my implementation takes three, I may be six times slower, but honestly, will the user notice much? No, they really won’t. But if my code blows up and kills their session, hoo-boy, I will definitely hear about it.

Back to My Story

I decided to change the code to use strings and vectors instead of char* and arrays. I’d change one function at a time, attempt to recompile, and see what other functions couldn’t compile. Then I’d track those functions down and change them, and so on. This went on for a week of late nights, with managers stopping by daily to check on my progress. Can’t we make a quick fix? Frankly, I largely blew them off. I was convinced that the only true solution was the radical overhaul. If they had had anyone else they thought could have fixed the code I’d probably have been out of a job, or at least reassigned elsewhere. Thankfully, they didn’t.

After a week of this I finally had a new version of the code to test.

It was put into production.

The memory errors were gone.

It has been years now. I’ve found plenty of other things to get wrong in the code, but not memory errors. The memory errors are gone.

Coda

A tool designer I worked with once shared a quote that has always stuck with me. I’ll paraphrase his paraphrasing:

It is one thing to create something that can’t obviously fail.
It is another to create something that obviously can’t fail.

– Anonymous

(If anyone can tell me the original quote and author I’d be very grateful. I have consulted the oracle at Google with no luck.)

Code that passes allocated memory around between functions and hands off responsibility for freeing the memory will, at best, be code that can’t obviously fail. It will never be code that obviously can’t fail.

Using the STL puts me closer to writing code that obviously can’t fail.

Resources

There are two books that I have found invaluable for learning to write better C++ code.

The first is Scott Meyers’ Effective C++, and the second is C++ Coding Standards by Sutter and Alexandrescu (disclosure: affiliate links). I highly recommend both books if you’re going to be doing any C++ programming. One small disclaimer: I actually have an older edition of Meyers’ book.

The post Killing Memory Errors with the STL appeared first on The PLM Dojo.

Learn to be a Teamcenter Developer

A question I get with some regularity is some variation of, “How do I get started as a Teamcenter ITK developer?”, or, “…as a Teamcenter customizer?” It’s gotten to the point where I either have to start ignoring all of those emails or write a post that I can point people to which answers the question […]

The post Learn to be a Teamcenter Developer appeared first on The PLM Dojo.

What’s the Difference between Server and User Exits?


Using Property Operations

I recently got to try out a new tool from the Teamcenter developer’s toolbox that I’ve never used before: property operations. I think I’m going to be using them again. If you haven’t used property operations before, take a look and see if you can find a way to use them. The Problem We have […]

The post Using Property Operations appeared first on The PLM Dojo.
