Code coverage on an Angular application

Posted on Updated on

The goal of this post is to talk about code coverage on an Angular application.
We define a set of steps that let you add code coverage to the test run of an Angular application:
1. Configure Karma for the application (browser, etc.).
2. Add the Karma coverage reporter.
3. Add a build script; we write a PowerShell script that adds this behavior to the build.

For development, I suggest downloading Visual Studio Code and creating an Angular application with the Angular CLI.

We also talk about continuous build.
After creating the application, modify the karma.conf.js file by adding the requirements below.

basePath: '',
frameworks: ['jasmine', '@angular/cli'],
plugins: [
  require('karma-phantomjs-launcher')
],

Register the reporters:

reporters: ['progress', 'kjhtml', 'coverage'],

Modify the preprocessors node:
preprocessors: {
  // source files that you want to generate coverage for
  // do not include tests or libraries
  // (these files will be instrumented by Istanbul)
  'src/**/*.js': ['coverage']
},

Add the coverageReporter node:
coverageReporter: {
  type: 'html',
  dir: 'coverage/'
},
Remove the Chrome browser and add PhantomJS; this headless browser lets the tests run without blocking on a UI.
browsers: ['PhantomJS'],

Set the singleRun attribute to true.
singleRun: true

The full file:

// Karma configuration file, see link for more information

module.exports = function (config) {
  config.set({
    basePath: '',
    frameworks: ['jasmine', '@angular/cli'],
    plugins: [
      // karma-jasmine, the reporters and the Angular CLI plugin are also
      // required for the configuration below to run
      require('karma-jasmine'),
      require('karma-phantomjs-launcher'),
      require('karma-jasmine-html-reporter'),
      require('karma-coverage'),
      require('karma-coverage-istanbul-reporter'),
      require('@angular/cli/plugins/karma')
    ],
    client: {
      clearContext: false // leave Jasmine Spec Runner output visible in browser
    },
    coverageIstanbulReporter: {
      reports: ['html', 'lcovonly'],
      fixWebpackSourcePaths: true
    },
    angularCli: {
      environment: 'dev'
    },
    reporters: ['progress', 'kjhtml', 'coverage'],
    preprocessors: {
      // source files that you want to generate coverage for
      // do not include tests or libraries
      // (these files will be instrumented by Istanbul)
      'src/**/*.js': ['coverage']
    },
    coverageReporter: {
      type: 'html',
      dir: 'coverage/'
    },
    port: 9876,
    colors: true,
    logLevel: config.LOG_INFO,
    autoWatch: true,
    browsers: ['PhantomJS'],
    singleRun: true
  });
};

Don’t forget to uncomment the IE polyfill lines in polyfills.ts:

/** IE9, IE10 and IE11 requires all of the following polyfills. **/
import 'core-js/es6/symbol';
import 'core-js/es6/object';
import 'core-js/es6/function';
import 'core-js/es6/parse-int';
import 'core-js/es6/parse-float';
import 'core-js/es6/number';
import 'core-js/es6/math';
import 'core-js/es6/string';
import 'core-js/es6/date';
import 'core-js/es6/array';
import 'core-js/es6/regexp';
import 'core-js/es6/map';
import 'core-js/es6/weak-map';
import 'core-js/es6/set';

Now we write a PowerShell script that launches the code coverage run:

param (
    [string] $rootApplication
)

function Test-Front([string] $rootApplication)
{
    Write-Host 'Angular Front - Begin testing' -Verbose

    if ([string]::IsNullOrEmpty($rootApplication))
    {
        throw [System.ArgumentNullException] "rootApplication"
    }

    Set-Location -Path $rootApplication

    Write-Host 'Launch of command : npm i --save-dev karma-phantomjs-launcher'
    npm i --save-dev karma-phantomjs-launcher

    Write-Host 'Launch of command : ng test'
    Write-Host 'Angular tests are hosted by the PhantomJS browser; you can check the configuration in karma.conf.js'
    ng test --code-coverage

    Write-Host 'Angular Front - End testing' -Verbose
}

Test-Front -rootApplication "$rootApplication"

The ng test --code-coverage command runs the tests and generates the code coverage report.
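For convenience, the coverage run can also be exposed as an npm script; this fragment is only an illustration (the coverage script name is an assumption, not part of the original post):

```json
{
  "scripts": {
    "test": "ng test",
    "coverage": "ng test --code-coverage"
  }
}
```

You can then launch it with npm run coverage.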

After execution, the result is a coverage folder containing the metrics under the application root (similar to the dist folder produced by the build process).

Below is a screenshot of the results.


VSTS and build agent on Premise


After a DryRun migration to VSTS, you have to ensure that your builds run correctly. Each build server has specific capabilities, so it can be hard to rebuild your build agent on Azure if you don’t have infrastructure as code (ARM templates, etc.), or to make do with a hosted agent.
To validate the migration of your build layer easily, a good practice is to set up a connection between your VSTS platform and your on-premise build agent before using a hosted build agent.

For information, below is a description of each agent pool available on VSTS:

• Hosted VS2017 if your team uses Visual Studio 2017.
• Hosted Linux if your team uses development tools on Ubuntu.
• Hosted macOS Preview if your team uses development tools on macOS.
• Hosted if your team uses Visual Studio 2013 or Visual Studio 2015.

I suggest following these steps:

1. First, go to your DryRun build board.

2. Download the agent for Windows; this gives you the agent archive.


3. Connect to your on-premise server (not an Azure server for this use case), create a VSTS directory under C:\, copy the agent archive there and unzip it.

4. Add the proxy file to this directory.

5. Add the following environment variables with the appropriate values:

  • http_proxy
  • https_proxy
  • no_proxy

6. Unzip the agent archive; the result is shown below.


7. Run PowerShell ISE as administrator.

8. Go to your agent directory; in our case it is test, which contains all the agents of the test VSTS platform.

cd C:\vsts\test
PS C:\vsts\test> .\config.cmd

9. Provide the following answers to the configuration questions.

Q1. Enter server URL?
Enter the address of your DryRun account.
Q2. Enter authentication type (press enter for PAT)?
Press Enter.
Q3. Enter personal access token?
Follow the steps below to generate a token.

To generate a token, go to the Security section.


Click on the Security section.


Click on the Add button to add a new token with full access.


Copy and paste the token into the PowerShell ISE command prompt.


After that, the on-premise server tries to connect to your VSTS account in order to create and configure the agent.

Q4. Enter agent pool (press enter for default)?
Keep default as the value.


Q5. Enter agent name (press enter for YourServer-1)?
Keep YourServer-1 as the value.


Q6. Enter work folder (press enter for _work)?
Keep _work as the value.


Q7. Enter run agent as service? (Y/N) (press enter for N)?
Answer Y.


Q8. Enter User account to use for the service (press enter for NT AUTHORITY\NETWORK SERVICE)?
Press Enter for this question.

Below is the list of answers for each question.


After that, we can go to the agent pool to make sure our agent is correctly configured.


How to migrate to VSTS step by step


With the multiple Team Foundation Server platforms (2015, 2017, etc.) and the generalization of the VSTS platform, many companies want to move from TFS 2015 to VSTS without going through the 2017 step.
However, this constraint adds another step: restoring your database on TFS 2017 before going to VSTS.
This post will describe the steps related to this use case.

Here is a post about the differences between TFS and VSTS: link

Remark: before the final migration, you go through a DryRun step, which lets you check the state of your migration.

Below are the migration steps:

1. Detach your collection on TFS 2015.

2. Back up your collection on your TFS 2015 (I suggest using the administration console to back up and restore the data layer).


3. Restore your collection on TFS 2017.

Remark: the TfsMigrator tool handles migration from TFS 2017 to VSTS, which is why this temporary step is required.

Download the TfsMigrator tool from here: link. TfsMigrator is a console application with a command for each technical need: you always begin by validating your migration, the next step is preparing, and you finish by importing.

Remark: the prepare process always runs the validate process first.

4. Run the validate command:

TfsMigrator validate /collection:http://yourServer/yourCollectionName

You have to correct all errors before running the migration; warnings are not blocking and can be ignored.

For example, if you have errors about matching between groups and SIDs, I suggest running TFSSecurity (C:\Program Files\Microsoft Team Foundation Server 15.0\Tools).


TFSSecurity.exe /a+ Identity "397c326b-b97c-4510-8271-75aac13de7a9\\" Read sid:S-1-9-XXXXXXXXXX-XXXXXXXXXX-XXXXXXXXXX-XXXXXXXXXX-0-0-0-0-3 ALLOW /collection:https://YourServer:8080/tfs/YourCollection

Run the validate command again until the validation process passes.
Once your migration validates, run the prepare command to prepare it.

Below is a screenshot of the command.

tfsmigrator prepare /collection:http://yourServer/yourCollectionName /t /accountregion:WEU
  • Enter your collection’s URL
  • Enter the tenant domain name

Remark: with the new version, make sure you have added /accountregion:WEU.

Below is the set of files generated and modified in the previous steps.


Now we can view the content of the JSON file and modify the attribute values.

For each attribute we have a set of actions to execute in order to define its value.


We have defined test as the account name; with this name you can check your migration at this URL:

We set the DryRun flag to indicate that the result is a test version of the migration.

We generate the DACPAC file in the root of TfsMigrator, so we indicate the path of the DACPAC file.
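To illustrate the shape of the file, here is an import.json sketch consistent with the attributes above; the exact field names are assumptions, so check them against the file generated by the prepare command:

```json
{
  "Source": {
    "Location": "<SAS URL of the Azure storage container>",
    "Files": {
      "Dacpac": "Tfs_test.dacpac"
    }
  },
  "Target": {
    "Name": "test"
  },
  "ValidationData": {},
  "ImportType": "DryRun"
}
```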

To generate the DACPAC file corresponding to your database, SqlPackage is the designated tool for the operation.

Go to C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin.

Just before executing SqlPackage to generate the DACPAC file, detach the collection.

Below is the associated command; set the server and database values, and make sure the target file is produced as output:


SqlPackage.exe /sourceconnectionstring:"Data Source=YourServer;Initial Catalog=YourDatabase;Integrated Security=True" /targetFile:C:\DACPAC\Tfs_test.dacpac /action:extract /p:ExtractAllTableData=true /p:IgnoreUserLoginMappings=true /p:IgnorePermissions=true /p:Storage=Memory



After generating the DACPAC file, we copy it into the migration directory under TfsMigrator.

Now we define the storage account URL; for this, we follow the steps below:

Download Microsoft Azure Storage Explorer and connect to your subscription.

Open Storage accounts under your subscription.

Open Blob containers.

Right-click and create a dedicated container under Blob Containers.


Define a shared access signature with an expiry time of at least one month.


Make sure the expiry date is correct and that all the authorizations are granted.

Click on the Create button.


Click on the Copy button, then paste the value into your JSON file as the storage URL in the Location attribute.

Upload the identity map file and the DACPAC file with Azure Storage Explorer by clicking the Upload button.

Below, the two uploaded files.



Ensure below that we have uploaded all the files.

Now open a command prompt and run the import command to finalize the migration.

tfsmigrator import /importFile:import.json

After running it, you get this screen.


You can also follow the migration screen on the board.







How to override the summary of your release or build?


If you want to add an additional panel to the summary of your release instance,
I suggest adding a new markdown file and uploading it with the dedicated VSO task.
Package this development into a custom task: link.

Below are the details of the development:

1. Begin by adding your markdown template file; put it into the task directory.

2. Add your markdown data file into the same directory.

3. Below, the content of the two files.

The template (the template file ensures that every release loads the same template with the same keys):

# Section name #
- Artefact 1 : @@Artefact1@@
- Artefact 2 : @@Artefact2@@

The data file (published to the server for every release):

# Section name #
- Artefact 1 : @@Artefact1@@
- Artefact 2 : @@Artefact2@@

Remark: the two files initially have the same content.

4. Now we add the Wrapper.ps1 file that performs the backend processing; its content is below.
For our case, we decided to fetch data from the testing layer.

param (
  [string] $artefact1,
  [string] $artefact2
)

This function gives access to the VSTS or TFS platform without entering credentials:

function Get-AccessToken
{
	Write-Verbose -Verbose "Getting Personal Access Token for the Run"
	$vssEndPoint = Get-ServiceEndPoint -Name "SystemVssConnection" -Context $distributedTaskContext
	$personalAccessToken = $vssEndPoint.Authorization.Parameters.AccessToken
	if (!$personalAccessToken)
	{
		throw "Could not extract personal access token. Exiting."
	}
	return $personalAccessToken
}

Load the markdown template file, fetch the data and write it into the data file:

# load the markdown template (the file names template.md and data.md are assumed)
$template = (Get-Content $PSScriptRoot\template.md)

# replace the artefact placeholders and write the data file
$template.Replace("@@Artefact1@@", $artefact1).Replace("@@Artefact2@@", $artefact2) | Set-Content $PSScriptRoot\data.md

# get access
$token = Get-AccessToken
$returnObj = @{}
$url = "$($env:SYSTEM_TEAMFOUNDATIONSERVERURI)$($env:SYSTEM_TEAMPROJECT)/_apis/test/runs?api-version=1.0"
$response = Invoke-RestMethod -Method Get -Uri $url -Headers @{Authorization = "Bearer $token"}

if ($response -eq $null)
{
	Write-Host "We don't have data..."
}


Upload the section into the summary with the VSO logging command (the file name data.md is assumed to match the generated data file):

Write-Host "##vso[task.uploadsummary]$PSScriptRoot\data.md"

Documentation on VSO tasks: link

Below is a screenshot of the result.


Access to VSTS Releases by C# in preview version



This post describes some lines of code that let you access the VSTS platform (Visual Studio Team Services). We describe the fundamental steps to access the new objects and services, beginning with the packages you have to download from NuGet in order to develop the access layer.

The benefit of this post is to easily identify the list of packages and classes you need to know in order to access the data, keeping in mind that they are in preview.

First, add these packages; note that the Microsoft.VisualStudio.Services.Release.Client package is a preview version.

Below is the full list of packages to download:

-Microsoft.AspNet.WebApi.Client with version =5.2.2
-Microsoft.AspNet.WebApi.Core with version =5.2.2
-Microsoft.IdentityModel.Clients.ActiveDirectory with version =3.13.5
-Microsoft.TeamFoundation.DistributedTask.Common.Contracts with version =15.120.0-preview
-Microsoft.TeamFoundationServer.Client with version =15.120.0-preview
-Microsoft.TeamFoundationServer.ExtendedClient with version =15.120.0-preview
-Microsoft.Tpl.Dataflow with version =4.5.24
-Microsoft.VisualStudio.Services.Client with version =15.120.0-preview
-Microsoft.VisualStudio.Services.InteractiveClient with version =15.120.0-preview
-Microsoft.VisualStudio.Services.Release.Client with version =15.120.0-preview
-Newtonsoft.Json with version =8.0.3
-System.IdentityModel.Tokens.Jwt with version =
-WindowsAzure.ServiceBus with version =3.3.2

The source code below creates a release definition. We define a VssClientCredentials object to establish the connection to your VSTS, so you need to provide your login and your token.

If you wish to get a token, go to the Security panel of your VSTS and create a basic token with a retention of 90 days.


We create VssClientCredentials from the credentials; to establish the connection with VSTS we define a VssConnection object based on them.

ReleaseHttpClient is the service that gives access to releases; it is based on the ReleaseHttpClientBase abstract class.

ReleaseHttpClientBase lets you create, delete and update your definitions.

In this sample we create a release definition.

VssClientCredentials credentialsDestination = new VssClientCredentials(
    new VssBasicCredential(ConfigurationManager.AppSettings["loginForMigration"],
                           ConfigurationManager.AppSettings["tokenForMigration"])); // second app setting name assumed

using (VssConnection connectionDestination = new VssConnection(serverUrlDestination, credentialsDestination))
{
    using (var releaseHttpClient = connectionDestination.GetClient<ReleaseHttpClient>())
    {
        try
        {
            Task<ReleaseDefinition> releaseDefinitionTask = releaseHttpClient.CreateReleaseDefinitionAsync(releaseDefinition, projectName);
        }
        catch (Exception)
        {
            throw; // rethrow without losing the stack trace
        }
    }
}

After describing the call to CreateReleaseDefinitionAsync, let's talk about the business rules the definition has to respect.

Constraints on VSTS:

  • The definition must have an Id.
  • The definition must have a unique Name.
  • On each environment of your release you have to define a RetentionPolicy with three properties: DaysToKeep, RetainBuild and ReleasesToKeep.
  • There are three types of agents; in order to stay consistent with the TFS platform, we use the AgentDeploymentInput type as the agent.

Remark: for future evolutions we recommend using deployment groups, i.e. replacing the AgentBasedDeployPhase type with the MachineGroupBasedDeployPhase type.

foreach (var environment in definition.Environments)
{
    environment.RetentionPolicy = new EnvironmentRetentionPolicy
    {
        DaysToKeep = daysToKeep,
        RetainBuild = true,
        ReleasesToKeep = releasesToKeep
    };

    var agentDeploymentInput = new AgentDeploymentInput();

    if (agentName == AgentNameDefaultConstant)
    {
        agentDeploymentInput.QueueId = 1;
    }

    DeployPhase deployPhase = new AgentBasedDeployPhase
    {
        Name = deployPhaseName,
        Rank = deployPhaseRank,
        DeploymentInput = agentDeploymentInput
    };

    // Adjust tasks of step
    if (environment.DeployStep != null && ...)
    {
        deployPhase.WorkflowTasks = .....;
    }
}

All the tasks are contained in the DeployPhase object.


Subscriber Policy TFS check-in


In order to filter your check-in operations (for example, with branching organized around features), the constraint is to ensure that nobody checks in directly on the Main branch, because it has to stay clean: you shouldn't check in sources on Main, and every change has to contain a merge action performed after the quality test process on the features.

So this post talks about creating a custom check-in policy; we have two solutions:

  • The first is to create a subscriber, deployed on the server and centralized for all TFS instances.
  • The second is to create a VSIX, deployed on demand by each client.

In this post we develop a custom subscriber that filters the changes, ensuring that each one contains at least one merge action.

Remark: this constraint relies on the new TFS 2017 APIs.

Below is the list of steps to follow in order to accomplish this goal.

1. Create a class library.

2. Add references to the following assemblies:

  • Microsoft.TeamFoundation.Common
  • Microsoft.TeamFoundation.Framework.Server
  • Microsoft.TeamFoundation.Server.Core
  • Microsoft.TeamFoundation.VersionControl.Server

3. Create a class named MergeNotificationSubscriber implementing the ISubscriber interface.

Below is the skeleton of the members to implement:

public string Name
{
    get { throw new NotImplementedException(); }
}

public SubscriberPriority Priority
{
    get { throw new NotImplementedException(); }
}

public EventNotificationStatus ProcessEvent(IVssRequestContext requestContext, NotificationType notificationType, object notificationEventArgs, out int statusCode, out string statusMessage, out ExceptionPropertyCollection properties)
{
    throw new NotImplementedException();
}

public Type[] SubscribedTypes()
{
    throw new NotImplementedException();
}

For the Name property, you can define the message of your strategy:

public string Name
{
    get { return "Require Merge operations on Integration and Production branch"; }
}

For the priority, you can set it to High:

public SubscriberPriority Priority
{
    get { return SubscriberPriority.High; }
}

For the type of operations, we subscribe to check-in notifications:

/// <summary>
/// Subscribes the types.
/// </summary>
public Type[] SubscribedTypes()
{
    return new Type[1] { typeof(CheckinNotification) };
}

Our logic goes into the ProcessEvent method. Note the condition on DecisionPoint, which means we catch the request before the check-in action (similar to an HTTP module: the server runs this code for each request).


/// <summary>Processes the event.</summary>
/// <param name="requestContext">The request context.</param>
/// <param name="notificationType">Type of the notification.</param>
/// <param name="notificationEventArgs">The notification event arguments.</param>
/// <param name="statusCode">The status code.</param>
/// <param name="statusMessage">The status message.</param>
/// <param name="properties">The properties.</param>
public EventNotificationStatus ProcessEvent(IVssRequestContext requestContext, NotificationType notificationType, object notificationEventArgs, out int statusCode, out string statusMessage, out ExceptionPropertyCollection properties)
{
    const string IntegrationPatternTeamProject = "$/...";
    const string ProductionPatternTeamProject = "$/...";
    const string TemplateMessageWindowConstant = "Check-in was rejected: the action type on the item has to be a merge";
    const string CollectionNameConstant = "...";
    statusCode = 0;
    properties = new ExceptionPropertyCollection();
    statusMessage = String.Empty;

    // Ensure that the policy is applied only on the target collection
    if (requestContext.ServiceHost.Name != CollectionNameConstant)
    {
        return EventNotificationStatus.ActionPermitted;
    }

    if (notificationType == NotificationType.DecisionPoint)
    {
        if (notificationEventArgs is CheckinNotification)
        {
            CheckinNotification checkinNotification = notificationEventArgs as CheckinNotification;
            var teamFoundationVersionControlService = requestContext.GetService<TeamFoundationVersionControlService>();
            var submittedItems = checkinNotification.GetSubmittedItems(requestContext);

            if (!submittedItems.All(x => x.Contains(IntegrationPatternTeamProject) || x.Contains(ProductionPatternTeamProject)))
            {
                return EventNotificationStatus.ActionPermitted;
            }

            using (var teamFoundationDataReader = teamFoundationVersionControlService.QueryPendingChangesForWorkspace(requestContext, checkinNotification.WorkspaceName, checkinNotification.WorkspaceOwner.UniqueName, submittedItems.Select(i => new ItemSpec(i, RecursionType.None)).ToArray(), false, 100, null, true))
            {
                if (teamFoundationDataReader != null)
                {
                    var nonMergedItem = teamFoundationDataReader.CurrentEnumerable<PendingChange>().FirstOrDefault(c => c.MergeSources == null || c.MergeSources.Count == 0);

                    if (nonMergedItem != null)
                    {
                        TeamFoundationApplicationCore.Log(TemplateMessageWindowConstant, 2010, EventLogEntryType.Information);
                        statusMessage = TemplateMessageWindowConstant;
                        return EventNotificationStatus.ActionDenied;
                    }
                }
            }
        }
    }

    return EventNotificationStatus.ActionApproved;
}


Get Release Management Data with the TFS REST API


Microsoft has released documentation for the VSTS and TFS REST APIs. In the recent past we were using the client object model and API to interact with TFS. But that was very difficult for clients without the .NET Framework installed; and even after installing it, you had to have Visual Studio, install dependencies on the TFS assemblies, know C#, and so on.
Now you can interact with VSTS and TFS via PowerShell scripts, or any other language, with simple instructions.
It also implies openness to other technologies and ease of use.
You specify an HTTP verb (such as GET, PUT, POST or PATCH) and connect to a specific URI (Uniform Resource Identifier) to interact with the API.
In this post we will define the URI of our TFS server, of our project collection and of the team project.

$UriCollectionProject =  $TFSInstanceURL+"/"+$ProjectCollection+"/_apis/projects?api-version=2.2-preview.1"
$CollectionResponse = Invoke-RestMethod -Method Get -Credential $credential -ContentType application/json -Uri $UriCollectionProject  

When pointing to TFS, you can pass a username and password (masked as a secret variable).
So we must add the lines below:

$User = "Aghilas.yakoub"
$Password = "MyPassword" 
$securePassword = $Password | ConvertTo-SecureString -AsPlainText -Force  
$credential = New-Object System.Management.Automation.PSCredential($User, $securePassword)  

In this post we will try to list the releases run by a specific person.
So first we will retrieve the list of team projects in a collection.
Remember that a collection contains a set of projects and a project contains a set of releases.
Below, a GET call on the URL of the collection:

$UriCollection =  $TFSURL+"/"+$ProjectCollectionName+"/_apis/projects?api-version=2.2-preview.1"
$CollectionResponse = Invoke-RestMethod -Method Get -Credential $credential -ContentType application/json -Uri $UriCollection

$CollectionResponse contains the set of team projects.

foreach ($project in $CollectionResponse.value)
{
    $TeamProject = $project.Name
}

Now, in a second step, we will retrieve the list of releases of a team project.
We will use the following suffix: /_apis/release/releases?api-version=2.2-preview.1

Ref :

# Construct the URI to query the team project and its releases
$Uri = $TFSURL + "/" + $ProjectCollectionName + "/" + $TeamProject + "/_apis/release/releases?api-version=2.2-preview.1"

# Get the response for the previous URI
$releaseresponse = Invoke-RestMethod -Method Get -Credential $credential -ContentType application/json -Uri $Uri

Now we just have to keep the releases created by Aghilas:

foreach ($releaseInstance in $releaseresponse.value)
{
    # createdBy is an identity object, so we compare on its display name
    if ($releaseInstance.createdBy.displayName -eq "Aghilas")
    {
        # process the matching release
    }
}


Document your TFS extension before publishing – Markdown


This post is here to help you improve the quality of extensions published on the marketplace: many extensions ship without documentation, which makes them very hard to use. We can add documentation easily by defining a markdown file.
Here is the set of steps to follow in order to document an extension.

1. Create your markdown file and add it to your library; if you have one extension library with a set of tasks, I suggest having one markdown file for all the tasks.

Remark: your file doesn't need to be copied to the output; keep the default options.


Below are samples of common markdown syntax:

# This is an H1
## This is an H2
###### This is an H6
*Italic characters*
_Italic characters_
**bold characters**
__bold characters__
~~strikethrough text~~
This is [an example]( inline link.

You can also install a markdown extension to make editing easier:


After installing Markdown Editor, I can preview the rendering of my file; note the small M symbol on the file.


Complete documentation here:

2. Register your file in the extension manifest by adding the section of code below.
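Since the snippet is shown as a screenshot, here is a sketch of the section (the file name readme.md is an assumption; use the path of your own markdown file):

```json
{
  "content": {
    "details": {
      "path": "readme.md"
    }
  }
}
```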


3. For this sample we add images; these images must be copied to the img directory (Content / do not copy)
and registered in the extension manifest.
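A sketch of the corresponding manifest entries (the image paths are assumptions for illustration):

```json
{
  "icons": {
    "default": "img/logo.png"
  },
  "files": [
    {
      "path": "img",
      "addressable": true
    }
  ]
}
```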


Now, after the packaging and publishing actions, if we go to the marketplace manager we can view the documentation of the extension.


Package and publish your TFS 2015 vNext extension


To present the solution, note that there are two aspects to set up, packaging and publishing; we begin our sample with packaging.


For the packaging we follow the steps below in order to construct our package; the result of this step is a VSIX file.

1. Setup Node.js

2. Install the vset tool with this command: npm install vset --save-dev


3. Run the Node.js command prompt tool (C:\Windows\System32\cmd.exe /k "C:\Program Files (x86)\nodejs\nodevars.bat")

4. Move to the directory of your extension.

5. Run the vset package command.


Your VSIX is generated in your root directory.


For information, you can inspect the content of your extension by unzipping the VSIX.



Before publishing our extension, it's possible to browse the different extensions available in the cloud; they can be downloaded and installed on an on-premise version of TFS in order to be reused.

We choose to publish our extension on the Azure marketplace.

We follow these steps: first we create a publisher, then we upload the extension.

1. Go to
2. Choose to create a publisher by completing a unique Id.

After that:
3. Upload your extension by dragging and dropping your VSIX.

4. Correct and adapt your manifest information.


5. Now it’s done and my VSIX is uploaded.


6. Share the extension with my Azure account; it's possible to update the version by clicking on the Update button.


7. Go to the TFS admin section, select the Extensions tab, and click on the target extension.



8. In Build vNext, find the created extension.


Create your TFS 2015 vNext extension


Start by downloading a template project. There are two project templates: one dedicated to the integration part, the other dedicated to building custom build or release tasks.

In this post we will build an extension that starts an application pool, without detailing that specific part.

For the backend part, we will combine our extension with a PowerShell script.

Begin by opening the Visual Studio extension manager.

Install the following template :


Below is a detail of the different directories of your solution.

Most of these directories will be deleted in a second step.


Now we will start developing our own extension.

First, delete the files that we don't use:

  1. Remove the test directory
  2. Remove the typings directory
  3. Delete the file app.ts

Then create the task skeleton:

  1. Add a Sample directory
  2. Create a PowerShell file Sample.ps1
  3. Create a task manifest file task.json
  4. Create the logo in the same directory

Below is the layout of the created project.


  1. Edit task.json by defining your layout based on the controls and groups concepts: inputs are your controls, typed with the type property, and grouped via the group property (in the sample below we have three groups).


Below is the task.json file after modification.
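Since the modified file is shown as a screenshot, here is a sketch of a task.json layout with groups and inputs (the group and input names are assumptions for illustration):

```json
{
  "id": "00000000-0000-0000-0000-000000000000",
  "name": "StartAppPool",
  "friendlyName": "Start application pool",
  "category": "Utility",
  "version": { "Major": 1, "Minor": 0, "Patch": 0 },
  "groups": [
    { "name": "server", "displayName": "Server", "isExpanded": true },
    { "name": "pool", "displayName": "Application pool", "isExpanded": true },
    { "name": "advanced", "displayName": "Advanced", "isExpanded": false }
  ],
  "inputs": [
    { "name": "serverName", "type": "string", "label": "Server name", "required": true, "groupName": "server" },
    { "name": "appPoolName", "type": "string", "label": "Application pool name", "required": true, "groupName": "pool" }
  ]
}
```

Each input's groupName places the control in one of the three groups.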


There is another section related to calling the code-behind; for our project the code-behind is PowerShell, but it can be another type.

  1. Implement your code-behind and match it with your layout arguments.

Ensure that the ps1 arguments match the JSON inputs, and in the execution section of your JSON, ensure that you have referenced your target file.
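A sketch of the execution section (assuming the Sample.ps1 file created earlier):

```json
{
  "execution": {
    "PowerShell": {
      "target": "$(currentDirectory)\\Sample.ps1",
      "argumentFormat": "",
      "workingDirectory": "$(currentDirectory)"
    }
  }
}
```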



  1. For debugging after implementation, open PowerShell ISE; you can find it at C:\Windows\System32\WindowsPowerShell\v1.0\powershell_ise.exe

Make sure you have the right to execute scripts for debugging, and just press F5.


  1. Register your extension in the manifest.