Must-Haves for Decentralized Apps

Whether you’re a developer or a user, these are the requirements for a truly decentralized app. If it lacks any of these, your app can be censored, and you should assume that it WILL be:

  1. No reliance on legacy DNS.

    1. While you CAN make use of DNS as an additional measure, your app should still fully function even if the entire DNS system is compromised or your domain name is confiscated.  Think of DNS as merely a gateway for legacy users to find your services.
  2. No reliance on a centralized account creation system.

    1. User accounts should be created client side ONLY, like a cryptocurrency wallet. The app’s ONLY concern with the user account should be that the user cryptographically signs their communication with you using their private key, and that you use their public key to encrypt private data you transmit to them.
  3. Deployment of the app should NOT depend on a centralized app publisher.

    1. The app should remain obtainable even if you, your company, or your organization ceases to exist. This does not mean that you can’t ALSO deploy to centralized app stores, but those should be SECONDARY. You should also steer your users away from centralized app stores.
  4. User’s personal data should ONLY be stored on their own device

    1. OR encrypted with their public key before being stored remotely to their choice of external storage.
  5. Remote storage

    1. All remote storage should live on a decentralized storage platform (the user’s Sia or Filecoin accounts, for example; for published data, IPFS and/or a blockchain). This doesn’t mean you can’t also make use of centralized platforms. In fact, make use of popular centralized cloud storage like Amazon S3, Dropbox, Google Drive, etc., but have the user add at least 3 of those to their storage preferences, encrypt their data locally with their public key, then mirror it (RAID 1 style) across all of them.
  6. Monetization

    1. Creator monetization should NOT be controlled by the app creator. The app creator should only ship code in their app that lets independent users pay each other directly, using a system outside the app creator’s control (such as cryptocurrencies).
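The encrypt-then-mirror idea in points 4 and 5 can be sketched in a few lines of Python. Everything here is a stand-in: the XOR “cipher” built from SHA-256 keystream blocks is for illustration only (a real app would use an actual public-key scheme, such as libsodium sealed boxes, so that only the user’s private key can decrypt), and the three “backends” are plain dictionaries standing in for S3, Dropbox, and Google Drive clients.

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Toy SHA-256 counter-mode keystream. Illustration ONLY; use a real
    public-key scheme (e.g. libsodium sealed boxes) in an actual app."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it a second time decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Stand-ins for the user's three chosen storage providers.
backends = {"s3": {}, "dropbox": {}, "gdrive": {}}

def replicate(user_key: bytes, name: str, plaintext: bytes) -> None:
    """Encrypt locally, then mirror the ciphertext to every provider."""
    blob = encrypt(user_key, plaintext)
    for store in backends.values():
        store[name] = blob

user_key = secrets.token_bytes(32)  # generated client side, never uploaded
replicate(user_key, "journal.txt", b"private journal entry")

# Any single surviving provider is enough to recover the data.
recovered = encrypt(user_key, backends["dropbox"]["journal.txt"])
print(recovered)  # b'private journal entry'
```

Because each provider holds a full encrypted copy, losing two of the three providers still leaves the data recoverable; that is mirroring (RAID 1 style) rather than parity striping.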

Speaking of Decentralized Monetization,

If you like my work, you can contribute directly to me with the following cryptocurrencies:

Bitcoin:

bc1qx6egntacpaqzvy95n90hgsu9ch68zx8wl0ydqg

Litecoin:

LXgiodbvY5jJCxc6o2hmkRF131npBUqq1r

TensorFlow, Python, & NVIDIA CUDA Setup

If you’re trying to get started with machine learning using TensorFlow, you’ll likely experience frustration trying to find the right versions of TensorFlow, Python, & the NVIDIA CUDA drivers that all work together.

Having just gone through that frustration myself, I present to you a WORKING set of instructions.

NVIDIA CUDA

This part is NOT REQUIRED unless you want to use your GPU for MUCH faster TensorFlow program execution.  You DO want to use your GPU, BTW!

As of this writing, CUDA 9.2 is the latest version; however, TensorFlow will not work with anything later than 9.0, so go here to download CUDA 9.0:

https://developer.nvidia.com/cuda-90-download-archive

If you don’t have an NVIDIA GPU, click here to get one…

NVidia GPUs on Amazon.com

What is CUDA?

CUDA is software that allows you (or programs written by other people) to write software that utilizes your video card’s GPU (Graphics Processing Unit).  A GPU is hardware designed specifically for video operations, which it performs many times faster than a CPU can.  It turns out you can also use your GPU for certain types of calculations that have nothing to do with graphics and get the same speedup… like training neural networks with a framework like TensorFlow.  GPUs are also good for cryptomining, but we won’t get into that in THIS article.

TensorFlow

Once you have CUDA installed (assuming you have an NVIDIA GPU and want to take advantage of the massive speedup it’ll give you compared to running TensorFlow on your CPU alone), it’s time to install TensorFlow.

Follow these instructions:

https://www.tensorflow.org/install/install_windows

They’ll also get you up and going with your first “Hello World!” program… after you get Python installed (next section).

Python

There are multiple versions and flavors of Python out there.  THIS is the one that will work with the versions of TensorFlow and CUDA listed above:

https://www.python.org/downloads/release/python-362/

Once you have them all installed, follow the TensorFlow tutorial at the link above.
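If you’d like to sanity-check your setup before starting the tutorial, a few lines of standard-library Python will report whether the interpreter you’re running matches the 3.6.x build linked above (the REQUIRED tuple is just this guide’s target; adjust it if the compatibility matrix changes):

```python
import sys

# The Python version this guide targets (see the python.org link above).
REQUIRED = (3, 6)

major, minor = sys.version_info[:2]
compatible = (major, minor) == REQUIRED
print(f"Python {major}.{minor} detected; "
      f"{'matches' if compatible else 'does not match'} "
      f"the {REQUIRED[0]}.{REQUIRED[1]} target")
```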

That’s it!

Extra

Here’s an easy-to-use Python playground site where you can write and test Python code as you learn, without installing anything!

https://www.tutorialspoint.com/execute_python_online.php

Git for Beginners

Target Audience

Programmers that need a good source code repository and versioning system.

Expected Knowledge Level:

Beginner through advanced. You do not necessarily need experience with other version control systems, but it helps, of course. Your knowledge of programming is of minimal importance to this article; if you’re reading this, you’re most likely a programmer, and that’s all that really matters.

Purpose of this article:

To give you a head start with Git. This is not a complete tutorial. It will give you the critical pieces of information that are usually missing from other documentation, because experienced Git users forget that newcomers don’t already know them.

What IS Git?

Git is a source code repository and versioning system. It’s free and open source. It lets you keep track of your source code projects, back them up to zero or more remote storage locations, share your source code (if you want), keep track of versions of your source code, branch from your source to work on special features without interfering with the main branch, merge branches together, and review source before merging it back into an important branch (for teams). It allows teams of programmers to easily work on the same project without undue burdens of coordination and synchronization.

What Problems Does Git Solve? (Why Git?)

First, let’s answer what version control systems in general solve, not just Git:

  • Provide a backup for your source code.
  • Allow collaboration with other programmers.
  • Allow keeping track of versions of your source.
  • Allow branching and/or forking of your source to work on specific features, bugs, or experimental releases without contaminating the main branch.
  • Replicate your source for safety.
  • Many other benefits.

So, why Git in particular? I’m not an advocate for Git specifically. I like it and I use it. What’s important is that you’re using a modern source code control system and have policies in place to prevent problems and provide standardized solutions. Git is one of many solutions. However, Git has risen in popularity and seems to be the de facto go-to source control software these days. And there’s good reason for that. It was created by Linus Torvalds (the creator of Linux) and is actively maintained. GitHub.com, arguably the most popular source code host on the planet, is based on Git. And like most source control systems, Git is multi-platform.

Again, I’m not advocating for Git; I’m writing a quick-start guide with a little bit of background. I’ve written plenty of articles on Subversion too. Note also that Mercurial is a distributed system contemporary with Git and very similar in concept, so much of what I cover here has a close analogue in Mercurial.

Things You Need to Know:

Git is not easy to get started with if you’re not familiar with it, and by definition, if you’re getting started with it, you’re NOT familiar with it. For one thing, Git is not a single product. Since it’s open source, there are MANY Git-compatible products: command-line tools, GUIs, integrations embedded into your favorite IDEs and source editors, plus multiple server options as well.

1. Terminology

  • “Repo”: A managed database of a source code project. Unlike other source control solutions like Subversion, where a “repo” is a centralized database storing all your projects, in Git a “repo” stores ONE source code project. For example, say you’re writing a game. You’d have a dedicated repo just for that game. On your local machine, you’ll have a complete repo folder named “.git” inside your primary source code folder.
  • “Project”: A centralized server can host multiple software projects. Each project is generally set up for a single software application being worked on by programmers. Programmers will “clone” or “check out” the project to their local machine, creating a local “repo”.
  • “Check Out”: The process of retrieving source code from a branch in a repo. That repo could be a remote repo or your local repo.
  • “Clone”: Pretty much the same thing as “Check Out”. In other source control systems, “checking out” a project informs the server that you have it checked out. In Git, the server is never aware of who has what; it doesn’t care and doesn’t need to know. You simply “clone” the project to get a local copy of the database, work on it locally, commit locally, then eventually push your changes back up.
  • “Check in”: This is not a term used in the world of GIT.
  • “Commit”: The act of submitting your local source code edits into your local repository.
  • “Push”: The act of sending all of your commits from one of your local repositories up to a remote server. If someone else committed and pushed changes to any of the same files you worked on, chances are you’ll have a conflict and will be forced to perform a merge.
  • “Merge”: The act of combining two conflicting versions of the same source file. You’ll be asked to pick which differing lines from each version should end up in the single merged file before committing.
  • “Pull”: The act of pulling the latest changes down from a remote repository into your local one.  Note that a “pull” is named from the perspective of the machine the code is moving TO: you “pull” from the server to your local machine.  (When you want the central repository to take YOUR code, you open a “pull request”, asking its maintainers to pull your changes in; see the next entry.)
  • “Pull Request”: The act of a programmer requesting that their committed and pushed changes be merged into a more important branch. One or more other programmers (frequently the project lead) will review your changes and decide whether or not to allow them to become part of the bigger project. You may be asked to make some minor changes and re-submit your pull request, or it may be rejected outright.

2. Storage

Unlike Subversion and the much older Microsoft Visual SourceSafe, you don’t have one server and multiple clients. Instead, Git has no “real” central server, though most teams use it in a way that treats one repo as the understood central repo.

You don’t simply check out from the server, edit, then check back in. Instead, your local machine, itself, becomes a server. You become a client to your own server. So, when you check out and commit your code, you’re doing it from and to your local repository. At any time, you can push all your commits from your local repo up to another repo. You can “pull” from a remote repo to yours to get yours up to date.

But while writing code, you’ll create branches locally in your own repo, then checkout from those local branches, edit, commit. You may do this many times. Eventually, you’ll want to push your changes up to the shared repo.

3. Branching

If you’ve ever tried branching in things like subversion, you’re probably aware of how difficult it is and how easy it is to screw things up badly.

(For comparison, see my earlier write-up on branching in Subversion.)

In Git, it becomes ridiculously easy. It’s so easy, in fact, that branching will become your common, everyday practice. Everything you do… every feature you add, every bug you fix… will be done in a branch.

In all fairness, though, it’s still hard if you’re not using the right tools. If you’re a command-line junkie (which I do not recommend, nor should anyone be impressed by someone insisting on sticking with the command line), you can implement best practices like GitFlow by hand. Better yet are the GitFlow plugins made for Visual Studio, GitKraken, and many other Git clients. These reduce the complexity of branching and merging to a couple of clicks and remove the human-error component, making your workflow incredibly powerful and easy at the same time.

4. GitFlow

Make your life much less complicated: start using the GitFlow best practice. Just because Git supports branching doesn’t mean that everyone’s going to do it the same way, nor that everyone’s doing it well. What’s your policy on how code moves from developers to production? There’s a nearly infinite number of hodge-podge schemes using Git to make that happen. GitFlow is a standardized way of doing it. Here it is, in a very short explanation:


  • When you create your project, you create a “main” or “master” branch. This becomes the gold standard for finished, polished code. You will most likely build what’s in there and publish it.
  • Create a branch off of “master” called “develop”. This will be the main working branch that programmers branch from and merge back into. It isn’t necessarily the “best” version of the code, but it is the “latest” version, the silver standard that all developers work from.
  • If you are tasked with fixing a bug or creating a new feature, you’ll create a new branch derived from the develop branch. You’ll work on your fix or feature until done, then merge it back into develop.
  • Some coding shops like to have a “bug fixes” branch, a “features” branch, and a “hot fixes” branch off the develop branch. The developers then never branch directly from “develop”; they branch from one of those 3 branches instead.
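The branch model described in the bullets above can be walked through with plain Git commands. Here’s a sketch that drives the git CLI from Python in a throwaway directory; it assumes git is installed and on your PATH, and the branch names simply follow the GitFlow convention:

```python
import subprocess
import tempfile
from pathlib import Path

def git(*args, cwd):
    # Run a git command in the given directory and return its stdout.
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = Path(tempfile.mkdtemp())
git("init", "-q", cwd=repo)
git("config", "user.email", "dev@example.com", cwd=repo)
git("config", "user.name", "Dev", cwd=repo)

# The initial branch: the gold standard for finished, polished code.
(repo / "app.txt").write_text("v1\n")
git("add", ".", cwd=repo)
git("commit", "-q", "-m", "initial release", cwd=repo)

# "develop": the main working branch, cut from the initial branch.
git("checkout", "-q", "-b", "develop", cwd=repo)

# A feature branch cut from develop; the actual work happens here.
git("checkout", "-q", "-b", "feature/login", cwd=repo)
(repo / "login.txt").write_text("login feature\n")
git("add", ".", cwd=repo)
git("commit", "-q", "-m", "add login feature", cwd=repo)

# Done with the feature: merge it back into develop.
git("checkout", "-q", "develop", cwd=repo)
git("merge", "-q", "--no-ff", "-m", "merge feature/login",
    "feature/login", cwd=repo)

log = git("log", "--oneline", cwd=repo)
print(log)
```

GitFlow tooling automates exactly this sequence (plus the pushing and branch cleanup), which is why it’s so much harder to screw up with the plugins installed.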

Making this happen is a chore if you don’t have tools designed for it, and you’re likely to introduce big mistakes without GitFlow tooling. If you’re using Microsoft Visual Studio, go to Extensions and search for GitFlow. Install that, and you can very easily create, pull, and work on a feature, bug, or hot-fix branch. When you’re done, you simply click “finish” and it does all the committing, pushing, and merging for you (except for merges where human intervention is required). Your F-Up rate will greatly decline and your co-workers will appreciate it!

If you’re using GitKraken, there’s a plugin for GitFlow there too. You can use both Visual Studio’s GitFlow and GitKraken’s GitFlow interchangeably, at the same time, on the same project.

No joke! Go get GitFlow now!

Resources/Tools:

  • The base GIT software:  https://git-scm.com/downloads
  • GIT Bash
  • GitFlow
  • Git Clients
    • Git GUIs
    • Inside Microsoft Visual Studio
      • VS directly supports GIT
      • Install the GitFlow extension.
    • Eclipse
    • Sublime
    • Android Studio
    • Stand-Alone clients
      • GitKraken
      • SourceTree
      • GitExtensions
      • Git Bash
  • GIT Servers
    • BitBucket.com
    • GitHub.com
    • VisualStudio.com


Error: “Interface name is not valid at this point”

If you ever get the Visual Studio error “Interface name is not valid at this point”, it’s a simple fix.  You have a simple typo.  See the example here:

container.RegisterType<IUser, User);

Notice the closing parenthesis?  There’s no opening parenthesis.  Notice the open angle bracket?  There’s no closing angle bracket.
Once you see that, the fix is obvious.  Replace the “)” with “>()”:

container.RegisterType<IUser, User>();


Extending Xamarin Forms


This is Keith’s second part to his earlier session, Introduction to Xamarin Forms.

Below are my in-session notes:

  • JetBrains dotPeek is a Windows app to help with XAML.  Extremely valuable according to Keith.
  • Demo was in Xamarin Studio (on Mac).  A little more stable than Visual Studio 2015 right now.
  • When starting new project, you have check boxes for target platforms (iOS & Android).
  • A UITests project is created for you by default.
  • Be sure to get latest packages because they’re updated frequently.
  • Creating a new XAML form creates a XAML file and a C# code behind file.
  • Inside XAML <ContentPage>, type in your new controls.
  • He created an Audio Recorder class to record some audio.
  • He’ll be targeting iPhone for this demo.
  • Data binding with BindableProperty type:
    • public static BindableProperty FileNameProperty = BindableProperty.Create("FileName", typeof(string));
    • public string FileName { get { return (string)this.GetValue(FileNameProperty); } set { this.SetValue(FileNameProperty, value); } }
  • MessagingCenter class lets you communicate between the layers (I presume he means between the code behind layer and the XAML layer).
  • C# code that’s native to the target platform is auto-generated (I think).
  • He built and deployed his demo to his iPhone and recorded his voice.  We didn’t hear the playback, but he swears it played back.  Don’t worry, we trust you Keith. 🙂
  • He created a “renderer” for a platform specific feature (>> on list items on iOS).  It will not fail on other platforms, it just won’t show it.

Quick & Dirty TeamCity (from zero to CI in no time)

At CodeStock 2015, I attended the session “Quick & Dirty TeamCity (from zero to CI (continuous integration) in no time)”.  Below are my notes from the session.

The session was presented by Joel Marshall @joelmdev.  Thanks for the excellent presentation Joel!

In short, TeamCity is a product with a web front end that lets your developers deploy their apps while giving you control over how their product is built. It also automatically runs their unit tests and stops the build (or blocks deployment by another product) if any of the unit tests fail. TeamCity can also monitor your source control provider to automatically detect new commits and build them.


  • Why TeamCity?
    • Free version supports up to 20 build configurations.
  • Installed build agent and server.
  • Set port to 8080 on dev machine so it doesn’t interfere with local IIS.
  • Edit your firewall to make a rule to open port 8080 if you want others to connect (if you’re setting up on your dev box.  If you’re setting up on a real web server, you’ll be using port 80 and won’t need to open any ports).
  • He set it up to monitor a BitBucket Git repo.
  • He then set up a build configuration and had it retrieve and build source from his Git repo on BitBucket.
  • There are all sorts of things that can happen during a build, and you can configure it to take different actions based on the results of each build step.
  • Miraculously, Joel successfully configured everything and got it all working.  He said he was more surprised than we were that he got it all working during the live demo.

My Comments on TeamCity (unrelated to the session)

I’ve been using TeamCity for about three and a half years (as a user, not an administrator) at 2 different companies.  I highly recommend it, along with Octopus Deploy (used at both companies for the same time period… Octopus Deploy can receive builds from TeamCity and deploy them to different environments).

Why TeamCity?

In a company environment with multiple developers, you really don’t want your developers just handing you compiled code and asking you to deploy it.  You should be publishing compiled code built by YOU, from source code YOU have.

TeamCity builds a deployable product from source code it gets from your source control repository.  The builds are version tagged and connected to the version it came from in Source Control.

If there are any problems with the build or running of unit tests, it will log it and provide you and the developers information on the errors.

You can configure multiple builds for the same product.  For example, you can have one build configuration for a testing environment, another for a QA environment, and another for a production environment.  As you know, environments usually connect to different databases, web service servers, etc.  TeamCity gives you tools to change the config files from the source so they work properly with the environment being targeted.  In Visual Studio, developers can create web.config transforms that provide dev, QA, test, & prod versions (or as many as you like) of their web.config files, and TeamCity can automatically recognize them and use the appropriate ones.

Once the product is built successfully, you can use another product to deploy the built code to the proper servers.  Octopus Deploy works great with TeamCity: it can auto-detect builds from TeamCity and deploy them automatically, or hold them while waiting for permission to deploy to certain environments.

Every production shop should be using tools like these, if not these exact ones.  They save so much time and effort, provide an audit trail of what was published, and make it easy to roll back bad deployments.  They make building and deploying a real “thing”, as opposed to some random developer making changes to a production server with no accountability.  As a developer myself, I want this, so I can’t be blamed for taking down a production server.  I do NOT want access to the production servers.

In addition to all of the advantages above, if you have weird stuff you have to do with any particular build/deployment process, you can automate just about all of it.

Introduction to Xamarin

CodeStock 2015 is the biggest CodeStock yet, almost double last year’s size, hosted this year at the Knoxville World’s Fair Park convention center.  It’s our first year having it at this convention center.  Below are my notes on the Intro to Xamarin Forms session.

Xamarin is a cross-platform development tool that lets you write mobile apps once and deploy to Android, iOS, or Windows Phone.  It’s not from Microsoft, but it’s a .Net platform that allows you to write your code in C# (and it now supports F#).  Below are my in-session notes.



  • Xamarin FORMS adds shared UI code (this is new): no more platform-specific UI code.
  • Xamarin has been around (via its Mono roots) since the early 2000s, so it’s not a new or fly-by-night company.
  • They negotiate on pricing.
  • You have to pay TWICE if you want BOTH iOS and Android. UGH!
  • Xamarin Forms is only for Enterprise. DOUBLE UGH!
  • Mac is required for iOS. TRIPLE UGH!
  • Cloud testing available
    • Automatically test your app on hundreds of mobile devices. Select what to test on. They have a room in Europe filled with hundreds of phones and tablets.
    • Captures screen shots, etc…
  • Xamarin University – $1,995 per developer – instructor-led live training. Free for a month right now, but there’s a catch: only 2 of the courses are available
    • intro – what we’re about
    • and very first one (how to use it)
  • Paid gives you 3 months access to business tier – because you need it to go through the training.
  • Not only can you use C#, but you can also use F#.
  • You HAVE to know the specifics of each platform (iOS & Android)
  • Tools
    • Xamarin Studio (PC or Mac)
    • Visual Studio plugin for VS 2010 and higher (requires biz or enterprise or starter, just not indie)
  • If you want to build for Windows Phone, you have to have Visual Studio.
  • Xamarin Studio doesn’t support iOS
  • VS supports both iOS and Android
  • Xamarin Android Player (emulator) faster than Google’s. Runs on Windows & OSX
  • They have a few images (Lollipop image is available)
  • Doesn’t work well with Windows Phone emulator.
  • Xamarin supports Android Wear, Apple Watch, & Microsoft Band
  • about 90% of code can be shared across platforms
  • PCL = Portable Class Libraries used for the “core” code in multi-platform applications.
  • About 80% of a Xamarin Forms app will be located here.
  • The Roslyn compiler is already supported in Xamarin.
  • Xamarin Forms
    • Xamarin UI controls are an abstraction above each platform’s native controls, but compile down to platform specific controls. Provides a native experience on each platform.
    • Layouts are common screen layouts that you can choose from.
    • Yes, you can nest layouts in them.
    • Forms are made with XAML, which enables MVVM.
    • Can also do it with code.
    • Extensibility
      • Can embed custom views anywhere.
      • Call platform APIs via shared services.
      • You can go full native API if you want (kind of defeats the purpose of using Xamarin though)
  • Custom Renderers
    • You can override a renderer for a specific platform.
  • Xamarin Forms
    • Reflection will be a problem on iOS because there’s no JIT on iOS (code is compiled ahead of time).
    • App Quality control
    • Xamarin Insights
      • Real time monitoring, track crashes, know of user problems before they report, get user’s e-mail address, etc…

 

Calling iSeries DB2 web services from .Net

Problem

Instead of giving you direct access to make DB2 calls to your corporate iSeries, your IT department is exposing iSeries database capabilities via web services provided by IBM’s WebSphere.  Sounds great, but as you’ve experienced, it’s a major pain because they’re not implemented the way you expect.

Solution

  1. Get the URL from your iSeries team for their web service.
  2. In your .Net project, right click “References” or “Service References” and choose “Add Service Reference”.
  3. In the “Add Service Reference” dialog, enter the URL they provided to you into the “Address:” field.  Be sure you add “?wsdl” to the end of it if it’s not already there, then click “Go”.
  4. You might be prompted for credentials.  Be sure to get those credentials from your iSeries team.  You will likely be prompted to enter them 3 or more times.  Yes, it’s a nuisance, but just do it.

Now you’ve got the web services added, but you’ve got some config file editing to do.

In your app.config or web.config file, find your custom binding for this service.  If you had to enter credentials, you’ll need to change the httpTransport to what you see below, but the realm will be different.  Get that from your iSeries team.

<customBinding>
  <binding name="Some_Crappy_NameServicesPortBinding">
    <textMessageEncoding messageVersion="Soap12" />
    <httpTransport authenticationScheme="Basic" realm="Secure_SOMETHING" />
  </binding>
</customBinding>

If your iSeries requires credentials, you’ll need to set them on your web proxy like this before you call a method:

ClientCredentials.UserName.UserName = username;
ClientCredentials.UserName.Password = password;

Now, calling a method on the web service is quite different.  You’ll have 4 classes provided in your proxy:

  1. Input
  2. Request
  3. Response
  4. Result

Each one has a name prefixed with “fweb” and some number.  For example, “fweb12Input”.  Each web service your iSeries team adds will have a new number.  Yes, this is entirely backasswards and highly inconvenient, but that’s the way IBM has done it.

You’ll want to instantiate a request object.  It has a field called “arg0” in it.  You’ll want to assign that to a newly instantiated Input object.  The Input object has fields in it representing what would normally be parameters to a web method.  Here’s an example:

var request = new fwebr024Request
{
    arg0 = new fwebr024Input
    {
        IN_FIRSTNAME = "John",
        IN_LASTNAME = "Smith",
        IN_SESSIONID = "Whatever",
        IN_USERID = "DOEJANE"
    }
};

Then you’ll call the web service like this:

var result = this.MyServiceProxy.fwebr024(request);

The result object will have the output of the web service.  It has a strangely named field called “@return”, which is an object with 3 fields representing any error that might have occurred:

  1. OUT_ERRCODE
  2. OUT_ERRSTATE
  3. OUT_ERRTEXT

That’s it.  It’s pretty hairy, but that’s how you do it.
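If it helps to see the whole request/response dance in one place, here’s a small Python mock of the shape the generated proxy gives you. The names (fwebr024, arg0, the IN_/OUT_ fields, “@return”) mirror the C# example above, but FakeService itself is purely hypothetical; it just echoes the pattern, since the real service lives on your iSeries:

```python
from types import SimpleNamespace

# Mirrors the generated Input class: its fields play the role of what
# would normally be parameters to a web method.
def make_input(**fields):
    return SimpleNamespace(**fields)

# Mirrors the generated Request class: a wrapper whose only job is to
# hold the input object in a field named "arg0".
def make_request(input_obj):
    return SimpleNamespace(arg0=input_obj)

class FakeService:
    """Hypothetical stand-in for the real generated proxy; echoes the pattern."""
    def fwebr024(self, request):
        # A real call would hit the iSeries; here we just report "no error"
        # using the same oddly named "@return" structure the proxy exposes.
        error = {"OUT_ERRCODE": 0, "OUT_ERRSTATE": "", "OUT_ERRTEXT": ""}
        return {"@return": error, "echo": vars(request.arg0)}

request = make_request(make_input(
    IN_FIRSTNAME="John",
    IN_LASTNAME="Smith",
    IN_SESSIONID="Whatever",
    IN_USERID="DOEJANE",
))
result = FakeService().fwebr024(request)
print(result["@return"]["OUT_ERRCODE"])  # prints 0
```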


Add a web.config transform and a publish profile in Visual Studio 2013

If you need to deploy your app to multiple environments, like in most corporate IT shops (dev, QA, Staging, Training, Production, etc…), then you’ll need to have multiple versions of your config files, or better yet, one config file and “transform” files (one for each deployment environment) that describe ONLY the differences between the main config file and that particular environment.

For example, the connection string in your Dev environment is likely different from the one in QA, which is different again in Staging, and different again in your live Production environment.


I’m not going to explain how to WRITE a config transform (how to tell it what needs to change).  At least, not in THIS article.  But I will tell you how to tell Visual Studio that you have multiple environments and how to make Visual Studio create the basic config transforms for you.

In this example, I’m creating a WCF Service application (works the same with pretty much any web type of application).

  1. Are you deploying a NON-ASP.NET app (like a ClickOnce app or a WinForms or WPF app)?  If so, install the NuGet package “Slow Cheetah”.  Why?  Because Visual Studio has built-in support for all this for web.config files, but NOT for app.config files.  Slow Cheetah lets you make transforms for ANY file in your project.
  2. Right-click your project and choose “Publish…”.
  3. In the “Publish Web” dialog, choose “Custom”.
  4. Give it a name.  NOTE!  If you have a different admin managing your deployment and/or build servers, you may want to check with them on what name to use, because it can make the difference between your stuff working or not!  For this example, I’ll call the transform “QA”.
  5. Choose your deployment method (web publish, file copy, etc…).  For this example, I’m choosing “File System” since it requires fewer settings to fill out and I’m going to leave “Target location:” blank.  My deployment admin will fill that in later, so I don’t even need to know this.  Click “Next”.
  6. Choose whether this deployment should be a “Release” or a “Debug” deployment.  This will cause it to build as debug or release.  (You will also have a debug and a release transform of your web.config file, and this new QA transform will inherit from one of those.)
    1. Expand “File Publish Options” and check the items you need, then click “Next”
  7. Final screen in the wizard.  Click “Close”.  You can’t click “Publish” if you left the target path blank above.


You’ve now successfully created a publish profile.


Now you’ll need to create a Web.config transform for this profile.

  1. Right-click your QA.pubxml file and choose “Add Config Transform”.  Do NOT choose “Add Transform” if you have Slow Cheetah installed.

You now have a new Web.QA.config file.


You can now code your base Web.config file the way you need it to run locally during development.  In your Web.QA.config file, you can add transforms to modify settings in your web.config file so that when you build for that environment, Visual Studio will produce a web.config file that’s right for that environment.

You can repeat these steps to add as many publish profiles as you need.



Creating a NuGet package

This is a short and dirty post that does NOT cover every possibility.  It assumes you’re done writing and testing your project and are now ready to deploy it as a NuGet package, that you’re on a Windows PC, and that you’re using Visual Studio.


  1. Open a PowerShell command prompt and CD into your project folder where your .csproj file lives.
  2. type nuget spec
    1. You might have to do:  nuget spec -f  to overwrite an existing nuspec file.
  3. Edit the *.nuspec file that was created and change the values you need.  Note that the ones with $stuff$ are pulled from your AssemblyInfo.cs file.  Edit your AssemblyInfo.cs file to have the right values so you don’t have to re-enter them here every time.
  4. Some things in the .nuspec don’t have attributes in the AssemblyInfo.cs file, so you’ll have to enter them manually, such as:
    1. Update text
    2. Tags
  5. Save the .nuspec file.
  6. From the PowerShell command line, type:  nuget pack MyProjectName.csproj
    1. or nuget pack MyProjectName.csproj -IncludeReferencedProjects  to make sure it includes the stuff it references.
  7. Now, copy your package file to your NuGet repository, and it should be available to other developers from within Visual Studio’s NuGet package manager.
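For reference, the skeleton that nuget spec generates looks roughly like this (exact contents vary by NuGet version). The $...$ tokens are the values pulled from your assembly attributes; the literal values are the ones you edit by hand:

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>$id$</id>
    <version>$version$</version>
    <authors>$author$</authors>
    <description>$description$</description>
    <releaseNotes>Summary of changes in this release.</releaseNotes>
    <tags>Tag1 Tag2</tags>
  </metadata>
</package>
```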