Blog Feed Info

This page provides an aggregate of feeds from Colorado SQL community members who blog about SQL Server.  If you live in Colorado and would like your feed listed, please send us an email and let us know.

ColoradoSQL Bloggers

Friday, May 22, 2015 9:45:18 AM
I am proud to announce that I have been selected to present a webinar as part of the 24 Hours of PASS series on June 24-25! The theme of this 24 Hours of PASS is Growing Our Community, which is all about speakers who have not yet presented a regular session or pre-con at the PASS Summit in previous years. While I have presented at the last two PASS Summits, in both cases they were 10-minute Lightning Talks, not regular sessions (fingers crossed that I get one this year). If you have not yet registered for PASS Summit 2015 in Seattle on 10/27 - 10/30, then use this link to register and the discount code LC15WTG2 to get $150 off!

I will be presenting my "Introduction to the Power BI Designer" session at noon MT/18:00 GMT, which I have given at other venues a couple of times before and is now my new favorite session. I love how the new Power BI is coming along and it is also a challenge to demo with the product being updated at least once every month.

If you can attend any of these 24 sessions starting at 6am MT/noon GMT, please register, as there is a fantastic line-up of sessions and speakers for these 24 hours and I'm honored to be in this group!

Thursday, May 14, 2015 4:38:37 PM

How I collect network statistics during benchmarks.
by the hardworking DBA…
as of 2/17/2013

I am benchmarking an Oracle stored procedure these days.  We have a 1-second SLA on this procedure.   It returns a rather large payload, sometimes exceeding 10,000 records and 5 MB of data.   I have challenged the justification for that, and I am assured that is what the application requires.

I was especially curious how many network packets it takes to move that much data.   The ping command told me the average network round-trip time between the server host and the client host is 0.14 ms.    That might be OK for a small payload, but if a single call takes 250,000 packets, at 0.14 ms per packet, that would threaten my SLA.
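As a quick sanity check on that worry, the arithmetic (using the numbers above) is simple:

```shell
# 250,000 round trips at 0.14 ms each, converted to seconds.
awk 'BEGIN { printf "%.0f seconds\n", 250000 * 0.14 / 1000 }'
# prints: 35 seconds
```

35 seconds is a long way past a 1-second SLA, so the packet count matters.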

If I wanted to check the impact of this stored procedure on the network, how could I do that?

Linux has a command, “netstat -s”, that produces the report shown in Listing 1 below.    It reports a lot of statistics about network usage.   I could run that before and after, and compare the results, right? 

The challenge is that the statistics are relative to when the machine last started.   I only care about statistics during a short period of time.   I cannot easily compare two 10-digit numbers that show total packets received, and tell if the difference is significant.    And a lot of the statistics reported don’t change at all.   I only care about the measures that changed.  

The solution is to
            1) take a “netstat -s” snapshot before the benchmark,
            2) run the benchmark,
            3) take a second “netstat -s” snapshot after the benchmark, and
            4) report any differences in a meaningful, human-readable way.
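The diff logic in step 4 is the only non-obvious part. Here is a minimal stand-alone sketch of it; the snapshot contents below are made-up sample data, not real netstat output:

```shell
# Illustrative sketch of step 4: given two "netstat -s" snapshots, print
# only the counters that changed. Sample files stand in for real snapshots.
cat > /tmp/net.before <<'EOF'
    1000 total packets received
    500 segments received
    42 resets sent
EOF
cat > /tmp/net.after <<'EOF'
    9210 total packets received
    8512 segments received
    42 resets sent
EOF
# Key each counter line by its text, then subtract before from after.
awk '/^[[:space:]]*[0-9]+ / {
        n = $1; $1 = ""; key = $0
        if (NR == FNR) before[key] = n
        else if (n - before[key] != 0) printf "%s: %d\n", key, n - before[key]
}' /tmp/net.before /tmp/net.after
```

With this sample data, only the two counters that moved are reported ("total packets received: 8210" and "segments received: 8012"); the unchanged "resets sent" line is suppressed. The real script in Listing 3 does the same thing, section by section.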

The result is a report such as the one shown in Listing 2.   While I was developing the script, I followed the netstat-differential report with some other statistics from sar and a list of the machines connected to my node.   Those things have helped me speak to a few other issues.  It doesn’t hurt to know such things.  

The script that produces Listing 2 is shown in Listing 3.

Check it out.  Maybe that will give you an idea?

Best wishes to always have good network throughput!
~ the hard-working DBA

Listing 1 – Typical output from the netstat -s command.

> netstat -s
    1346387939 total packets received
    2943839 with invalid addresses
    0 forwarded
    0 incoming packets discarded
    1282689297 incoming packets delivered
    3791124361 requests sent out
    19013 outgoing packets dropped
    6 fragments dropped after timeout
    128 reassemblies required
    26 packets reassembled ok
    6 packet reassembles failed
    136 fragments received ok
    19012 fragments failed
    422 fragments created
    74159 ICMP messages received
    255 input ICMP message failed.
    ICMP input histogram:
        destination unreachable: 24427
        timeout in transit: 7
        echo requests: 3238
        echo replies: 46462
        timestamp request: 17
        address mask request: 8
    93967 ICMP messages sent
    0 ICMP messages failed
    ICMP output histogram:
        destination unreachable: 25376
        echo request: 65521
        echo replies: 3053
        timestamp replies: 17
        InType0: 46462
        InType3: 24427
        InType8: 3238
        InType11: 7
        InType13: 17
        InType17: 8
        OutType0: 3053
        OutType3: 25376
        OutType8: 65521
        OutType14: 17
    18330381 active connections openings
    10578845 passive connection openings
    31217 failed connection attempts
    804128 connection resets received
    150 connections established
    1104945674 segments received
    3666265534 segments send out
    2514600 segments retransmited
    31 bad segments received.
    242582 resets sent
    122170701 packets received
    16624 packets to unknown port received.
    0 packet receive errors
    122252674 packets sent
    89760 invalid SYN cookies received
    2411 resets received for embryonic SYN_RECV sockets
    65 packets pruned from receive queue because of socket buffer overrun
    211 ICMP packets dropped because they were out-of-window
    14535660 TCP sockets finished time wait in fast timer
    181501 time wait sockets recycled by time stamp
    1370 packets rejects in established connections because of timestamp
    126238132 delayed acks sent
    43908 delayed acks further delayed because of locked socket
    Quick ack mode was activated 1727250 times
    372 times the listen queue of a socket overflowed
    372 SYNs to LISTEN sockets ignored
    521838861 packets directly queued to recvmsg prequeue.
    3963876152 packets directly received from backlog
    3457491267 packets directly received from prequeue
    293190 packets dropped from prequeue
    3307290762 packets header predicted
    3884415728 packets header predicted and directly queued to user
    886041005 acknowledgments not containing data received
    1618307249 predicted acknowledgments
    1 times recovered from packet loss due to fast retransmit
    549898 times recovered from packet loss due to SACK data
    3 bad SACKs received
    Detected reordering 163 times using FACK
    Detected reordering 3012 times using SACK
    Detected reordering 613 times using time stamp
    616 congestion windows fully recovered
    7910 congestion windows partially recovered using Hoe heuristic
    TCPDSACKUndo: 459
    10024 congestion windows recovered after partial ack
    2549350 TCP data loss events
    TCPLostRetransmit: 28
    1 timeouts after reno fast retransmit
    5440 timeouts after SACK recovery
    1123 timeouts in loss state
    2193866 fast retransmits
    93844 forward retransmits
    99062 retransmits in slow start
    71073 other TCP timeouts
    TCPRenoRecoveryFail: 1
    12967 sack retransmits failed
    265222 times receiver scheduled too late for direct processing
    17038 packets collapsed in receive queue due to low socket buffer
    1721513 DSACKs sent for old packets
    49 DSACKs sent for out of order packets
    10316 DSACKs received
    550 DSACKs for out of order packets received
    29 connections reset due to unexpected SYN
    770680 connections reset due to unexpected data
    53290 connections reset due to early user close
    4355 connections aborted due to timeout
    InMcastPkts: 862471
    InBcastPkts: 54622317

Listing 2 – An example of the output from the script.
Netstat Differential Report.

                                      total packets received: 8210
                                      with invalid addresses: 1
                                  incoming packets delivered: 8173
                                           requests sent out: 8670
                                 active connections openings: 68
                                 passive connection openings: 31
                                  connection resets received: 2
                                           segments received: 8012
                                           segments send out: 8522
                                            packets received: 148
                                                packets sent: 148
                TCP sockets finished time wait in fast timer: 53
                                           delayed acks sent: 211
                packets directly queued to recvmsg prequeue.: 4746
                     packets directly received from prequeue: 2240426
                                    packets header predicted: 1601
        packets header predicted and directly queued to user: 4731
                acknowledgments not containing data received: 622
                                   predicted acknowledgments: 5877
                    connections reset due to unexpected data: 2
                                                 InBcastPkts: 13

End of Netstat Differential Report.
Recent Network stats reported by SAR.
12:00:01 AM     IFACE   rxpck/s   txpck/s   rxbyt/s   txbyt/s   rxcmp/s   txcmp/s  rxmcst/s
07:50:01 PM        lo     15.97     15.97  49533.85  49533.85      0.00      0.00      0.00
07:50:01 PM      eth0    139.20    179.11  90875.86 131371.47      0.00      0.00      0.44
07:50:01 PM      eth1     34.23     35.50   6752.12  18519.29      0.00      0.00      0.44
07:50:01 PM      eth2      1.75      0.01    159.03      0.65      0.00      0.00      0.44
07:50:01 PM      eth3      1.99      0.00    183.95      0.11      0.00      0.00      0.44
07:50:01 PM      sit0      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Connections are seen to the following hosts.

Listing 3 – Korn-shell script that produces the netstat differential report shown above.

> cat netstat_differential_report.ksh

# This script simplifies taking a netstat snapshot before a benchmark, taking
# another netstat snapshot after a benchmark, and then reporting a
# comparison of just the network statistics that changed during the benchmark.
# See the show_usage function for more explanation.
# History
# Feb 2013 SWC  Wrote it.

export NET_FILE_1
export NET_FILE_2

show_usage ()
{
cat     <<-EOF
        $0 -n SNAPSHOT_NUMBER [-h|-?]
        -n enables specification of a snapshot number.

        When you say "-n 1", the script will run netstat -s and save the
                output for future reference.
        When you say "-n 2", the script will run netstat -s, and display a
                report of the differences in network statistics since
                the first snapshot was saved.

        The temporary files used by this script are saved in a subdirectory called logs.

        Best wishes.
EOF
exit 1
}
# Parse the command line parameters.
if [ $# -eq 0 ]; then
        show_usage
fi

while getopts n:h option
do
        case $option in
                n) SNAPSHOT_NUM=$OPTARG ;;
                h) show_usage ;;
                \?) show_usage ;;
        esac
done

if [ "$SNAPSHOT_NUM." = "." ]; then
        echo "Error: Please specify which snapshot_number to process."
        exit 1
fi

DIR=`dirname $0`
cd $DIR
BASE=`basename $0 .ksh`

if [ ! -d $DIR/logs ]; then
        mkdir $DIR/logs
fi

# Temporary file locations used throughout the script.
NET_FILE=$DIR/logs/netstat.$SNAPSHOT_NUM.log
NET_STRING_FILE=$DIR/logs/netstat.strings.tmp
IP_FILE=$DIR/logs/netstat.ips.tmp
NS_FILE=$DIR/logs/netstat.names.tmp

# Use this version when the netstat statistic is at the start of the line.
report_netstat_diff_v1 ()
{
        S="$1"
        grep "$S$" $NET_FILE_1 | awk '{print $1}' | read V1
        grep "$S$" $NET_FILE_2 | awk '{print $1}' | read V2
        let V=V2-V1
        if [ $V -ne 0 ]; then
                printf "%60.60s: %d\n" "$S" $V
        fi
        echo    $S      >> $NET_STRING_FILE
}

# Use this version when the netstat statistic follows a colon.
report_netstat_diff_v2 ()
{
        S="$1"
        grep "$S" $NET_FILE_1 | cut -f2 -d':' | read V1
        grep "$S" $NET_FILE_2 | cut -f2 -d':' | read V2
        let V=V2-V1
        if [ $V -ne 0 ]; then
                printf "%60.60s: %d\n" "$S" $V
        fi
        echo    $S      >> $NET_STRING_FILE
}

# Use this version when the netstat statistic is in the middle of the line.
report_netstat_diff_v3 ()
{
        S1="$1"
        FIELD_NUM="$2"
        S2="$3"
        grep "$S1" $NET_FILE_1 | grep "$S2" | cut -f$FIELD_NUM -d' ' | read V1
        grep "$S1" $NET_FILE_2 | grep "$S2" | cut -f$FIELD_NUM -d' ' | read V2
        let V=V2-V1
        if [ $V -ne 0 ]; then
                printf "%60.60s _ %s: %d\n" "$S1" "$S2" $V
        fi
        echo    $S1     >> $NET_STRING_FILE
}

if [ $SNAPSHOT_NUM -eq 1 ]; then
        rm -f $DIR/logs/netstat.*

        netstat -s > $NET_FILE

        if [ ! -f $NET_FILE ]; then
                print "Error: Failed to save snapshot 1."
                exit 1
        fi

        print "`date`: Saved netstat snapshot number 1."

elif [ $SNAPSHOT_NUM -eq 2 ]; then
        netstat -s > $NET_FILE

        print "`date`: Saved netstat snapshot number 2."

        # Separate the file contents by section.
        for N in 1 2; do
                cat $DIR/logs/netstat.$N.log |
                while read LINE; do
                        if [ "$LINE" = "Ip:" ]; then
                                OUT_FILE=$DIR/logs/netstat.$N.Ip
                        elif [ "$LINE" = "Icmp:" ]; then
                                OUT_FILE=$DIR/logs/netstat.$N.Icmp
                        elif [ "$LINE" = "ICMP input histogram:" ]; then
                                OUT_FILE=$DIR/logs/netstat.$N.Icmp_input
                        elif [ "$LINE" = "ICMP output histogram:" ]; then
                                OUT_FILE=$DIR/logs/netstat.$N.Icmp_output
                        elif [ `echo "$LINE" | grep -c "ICMP messages sent"` -gt 0 ]; then
                                # Continue putting lines back in the regular Icmp file.
                                OUT_FILE=$DIR/logs/netstat.$N.Icmp
                                echo $LINE >> $OUT_FILE
                        elif [ "$LINE" = "IcmpMsg:" ]; then
                                OUT_FILE=$DIR/logs/netstat.$N.IcmpMsg
                        elif [ "$LINE" = "Tcp:" ]; then
                                OUT_FILE=$DIR/logs/netstat.$N.Tcp
                        elif [ "$LINE" = "Udp:" ]; then
                                OUT_FILE=$DIR/logs/netstat.$N.Udp
                        elif [ "$LINE" = "TcpExt:" ]; then
                                OUT_FILE=$DIR/logs/netstat.$N.TcpExt
                        elif [ "$LINE" = "IpExt:" ]; then
                                OUT_FILE=$DIR/logs/netstat.$N.IpExt
                        else
                                echo $LINE >> $OUT_FILE
                        fi
                done
        done

        printf "%80s\n" " " | tr ' ' '='
        print  "Netstat Differential Report."

        # Now compare the different sections one at a time.
        for TYPE in Ip Icmp Icmp_input Icmp_output IcmpMsg Tcp Udp TcpExt IpExt; do
                # Point the comparison functions at this section's files.
                NET_FILE_1=$DIR/logs/netstat.1.$TYPE
                NET_FILE_2=$DIR/logs/netstat.2.$TYPE
                rm -f $NET_STRING_FILE
                touch $NET_STRING_FILE

                print "$TYPE:"
                case $TYPE in
                Ip)
                        report_netstat_diff_v1 'total packets received'
                        report_netstat_diff_v1 'with invalid addresses'
                        report_netstat_diff_v1 'forwarded'
                        report_netstat_diff_v1 'incoming packets discarded'
                        report_netstat_diff_v1 'incoming packets delivered'
                        report_netstat_diff_v1 'requests sent out'
                        report_netstat_diff_v1 'outgoing packets dropped'
                        report_netstat_diff_v1 'fragments dropped after timeout'
                        report_netstat_diff_v1 'reassemblies required'
                        report_netstat_diff_v1 'packets reassembled ok'
                        report_netstat_diff_v1 'packet reassembles failed'
                        report_netstat_diff_v1 'fragments received ok'
                        report_netstat_diff_v1 'fragments failed'
                        report_netstat_diff_v1 'fragments created'
                        ;;
                Icmp)
                        report_netstat_diff_v1 'ICMP messages received'
                        report_netstat_diff_v1 'input ICMP message failed.'
                        report_netstat_diff_v1 'ICMP messages sent'
                        report_netstat_diff_v1 'ICMP messages failed'
                        ;;
                Icmp_input)
                        report_netstat_diff_v2 'destination unreachable'
                        report_netstat_diff_v2 'timeout in transit'
                        report_netstat_diff_v2 'echo requests'
                        report_netstat_diff_v2 'echo replies'
                        report_netstat_diff_v2 'timestamp request'
                        report_netstat_diff_v2 'address mask request'
                        ;;
                Icmp_output)
                        report_netstat_diff_v2 'destination unreachable'
                        report_netstat_diff_v2 'echo request'
                        report_netstat_diff_v2 'echo replies'
                        report_netstat_diff_v2 'timestamp replies'
                        ;;
                IcmpMsg)
                        report_netstat_diff_v2 'InType0'
                        report_netstat_diff_v2 'InType3'
                        report_netstat_diff_v2 'InType8'
                        report_netstat_diff_v2 'InType11'
                        report_netstat_diff_v2 'InType13'
                        report_netstat_diff_v2 'InType17'
                        report_netstat_diff_v2 'OutType0'
                        report_netstat_diff_v2 'OutType3'
                        report_netstat_diff_v2 'OutType8'
                        report_netstat_diff_v2 'OutType14'
                        ;;
                Tcp)
                        report_netstat_diff_v1 'active connections openings'
                        report_netstat_diff_v1 'passive connection openings'
                        report_netstat_diff_v1 'failed connection attempts'
                        report_netstat_diff_v1 'connection resets received'
                        report_netstat_diff_v1 'connections established'
                        report_netstat_diff_v1 'segments received'
                        report_netstat_diff_v1 'segments send out'
                        report_netstat_diff_v1 'segments retransmited'
                        report_netstat_diff_v1 'bad segments received.'
                        report_netstat_diff_v1 'resets sent'
                        ;;
                Udp)
                        report_netstat_diff_v1 'packets received'
                        report_netstat_diff_v1 'packets sent'
                        report_netstat_diff_v1 'packet receive errors'
                        report_netstat_diff_v1 'packets to unknown port received.'
                        ;;
                TcpExt)
                        report_netstat_diff_v1 'invalid SYN cookies received'
                        report_netstat_diff_v1 'resets received for embryonic SYN_RECV sockets'
                        report_netstat_diff_v1 'packets pruned from receive queue because of socket buffer overrun'
                        report_netstat_diff_v1 'ICMP packets dropped because they were out-of-window'
                        report_netstat_diff_v1 'TCP sockets finished time wait in fast timer'
                        report_netstat_diff_v1 'time wait sockets recycled by time stamp'
                        report_netstat_diff_v1 'packets rejects in established connections because of timestamp'
                        report_netstat_diff_v1 'delayed acks sent'
                        report_netstat_diff_v1 'delayed acks further delayed because of locked socket'
                        report_netstat_diff_v3 'Quick ack mode was activated'    6 'times'
                        report_netstat_diff_v1 'times the listen queue of a socket overflowed'
                        report_netstat_diff_v1 'SYNs to LISTEN sockets ignored'
                        report_netstat_diff_v1 'packets directly queued to recvmsg prequeue.'
                        report_netstat_diff_v1 'packets directly received from backlog'
                        report_netstat_diff_v1 'packets directly received from prequeue'
                        report_netstat_diff_v1 'packets dropped from prequeue'
                        report_netstat_diff_v1 'packets header predicted'
                        report_netstat_diff_v1 'packets header predicted and directly queued to user'
                        report_netstat_diff_v1 'acknowledgments not containing data received'
                        report_netstat_diff_v1 'predicted acknowledgments'
                        report_netstat_diff_v1 'times recovered from packet loss due to fast retransmit'
                        report_netstat_diff_v1 'times recovered from packet loss due to SACK data'
                        report_netstat_diff_v1 'bad SACKs received'
                        report_netstat_diff_v3 'Detected reordering' 3 'times using FACK'
                        report_netstat_diff_v3 'Detected reordering' 3 'times using SACK'
                        report_netstat_diff_v3 'Detected reordering' 3 'times using time stamp'
                        report_netstat_diff_v1 'congestion windows fully recovered'
                        report_netstat_diff_v1 'congestion windows partially recovered using Hoe heuristic'
                        report_netstat_diff_v2 'TCPDSACKUndo'
                        report_netstat_diff_v1 'congestion windows recovered after partial ack'
                        report_netstat_diff_v1 'TCP data loss events'
                        report_netstat_diff_v2 'TCPLostRetransmit'
                        report_netstat_diff_v1 'timeouts after reno fast retransmit'
                        report_netstat_diff_v1 'timeouts after SACK recovery'
                        report_netstat_diff_v1 'timeouts in loss state'
                        report_netstat_diff_v1 'fast retransmits'
                        report_netstat_diff_v1 'forward retransmits'
                        report_netstat_diff_v1 'retransmits in slow start'
                        report_netstat_diff_v1 'other TCP timeouts'
                        report_netstat_diff_v2 'TCPRenoRecoveryFail'
                        report_netstat_diff_v1 'sack retransmits failed'
                        report_netstat_diff_v1 'times receiver scheduled too late for direct processing'
                        report_netstat_diff_v1 'packets collapsed in receive queue due to low socket buffer'
                        report_netstat_diff_v1 'DSACKs sent for old packets'
                        report_netstat_diff_v1 'connections reset due to unexpected SYN'
                        report_netstat_diff_v1 'connections reset due to unexpected data'
                        report_netstat_diff_v1 'connections reset due to early user close'
                        report_netstat_diff_v1 'connections aborted due to timeout'
                        ;;
                IpExt)
                        report_netstat_diff_v2 'InMcastPkts'
                        report_netstat_diff_v2 'InBcastPkts'
                        ;;
                esac

                # Show any strings that might have been missed in the case-statement above.
                diff $NET_FILE_1 $NET_FILE_2            |
                        grep -v '\-\-\-'                |
                        grep -v '[0-9],[0-9]'           |
                        grep -v '[0-9]c[0-9]'           |
                        grep -v -f $NET_STRING_FILE
        done

        print  "End of Netstat Differential Report."
        printf "%80s\n" " " | tr ' ' '='

        print  "Recent Network stats reported by SAR."
        sar -n DEV | grep IFACE | head -1
        sar -n DEV | grep -v Average | tail -6

        printf "%80s\n" " " | tr ' ' '='
        print  "Connections are seen to the following hosts."
        netstat -nat | awk '{ print $5}' | cut -d: -f1 | sed -e '/^$/d' | sort -u > $IP_FILE

        grep '\.' $IP_FILE |
        while read IP; do
                nslookup $IP 2>&1 | grep name | cut -f2 -d'=' >> $NS_FILE
        done
        sort -u $NS_FILE

        # Cleanup
        rm $NS_FILE
        rm $IP_FILE

        rm $DIR/logs/netstat.*
fi

…End of another blog by the hardworking DBA!
Friday, May 08, 2015 3:00:20 AM
Last year you might have heard about a dust up around PASS voter eligibility. I got pretty vocal about it, but in the end PASS did the right thing. Now PASS is trying to do the right thing again by getting ahead of the issue and notifying the community to check their eligibility. Here’s the […]
Tuesday, May 05, 2015 6:43:45 AM
Tip # 6 – Change History Top 10 Tips for SQL Server Performance and Resiliency This article is part 6 in a series on the top 10 most common mistakes that I have seen impacting SQL Server Performance and Resiliency. This post is not all inclusive. Most common mistake #6: Not Keeping a Change History […]
Tuesday, May 05, 2015 2:00:22 AM
Over the past months, I’ve devoted a lot of time to scripts and tips on using Powershell. While helpful, these articles assume a level of familiarity with Powershell that a lot of SQL Server administrators don’t have. I wanted to take some time to help out those who are looking for a place to start. […]
Monday, May 04, 2015 12:20:02 PM
I'll be doing a new session, "Introduction to Power BI Designer" tomorrow at 11am ET as part of the Pragmatic Works Training on the T's series of FREE webinars! Please register at the link above and see how the new Power BI Designer that is currently in preview works and what I expect it means for the future of BI at Microsoft (I even do a little bit of a look back for those that aren't up on the latest with Power BI).

Looking forward to seeing everyone at the session tomorrow morning. Look for another post later this week with the link to the slides and answers to any questions that I don't get to during the session.

Update 5/4 - Thanks to everyone that attended this session on 4/23, below are some of the questions that I didn't get a chance to answer during the webinar.

Can data refresh be scheduled?
With the new Power BI it will depend on which level you subscribe to. For the free version you can schedule refreshes only once a day, but the Pro level will allow you to schedule updates every hour.

You demo'd the Power BI Designer desktop application, is there an online web-based version?
Yes, they are planning on making a web version available at the time of release. My guess is that this is why the roll-out of features in Power BI Designer has been a bit slower: they are making sure that all of this code can run in both the desktop and web applications.

Power BI Designer Preview is only available as a 64bit application, will there be a 32bit version?
No. Microsoft has made it clear that a lot of the issues with the current Power BI Excel add-ins come from being constrained to run in 32-bit. 64-bit gives the stability to hold the millions/billions of rows of data in the application in memory.

What is the difference between Power Query and Power BI Designer?
Power Query is the add-on that currently runs in Excel only. For Power BI Designer, Microsoft is taking much of the functionality from Power Query and integrating it into that tool. In the end, Microsoft will probably stop updating Power Query for Excel and make the newest versions work only in Power BI Designer; this is already apparent in some of the SaaS sources they are adding, which are not available in Power Query.

What data types are supported in Power BI Designer?
The types available are changing with new versions of the Power BI Designer, but as of the April 2015 release, this link provides the full list and some great detail on how to convert between them in DAX (thanks to the attendee that posted this link in the chat).

How is security managed?
Microsoft is saying right now that the full version will support integration with Active Directory; what we have not heard is whether that is on-premises and/or Azure. As of right now, the only source that fully works with built-in security is an SSAS Tabular model used with the Power BI Analysis Services Connector Preview. I would assume that in future updates before release this connector will be upgraded to work with SSAS Multi-Dimensional models as well, but there is no timetable for when that will be. With some of the announcements at Microsoft Ignite today, I can see how the new Azure services for integration with Active Directory will be used to fill this gap in some way.

Can you clarify what can be done locally at no cost, and what requires a paid subscription?
When you use Power BI Designer, you can use all of the sources it currently connects to, as long as you are able to access those sources (for SQL Server you will need to be on the same domain or otherwise be able to reach the server). All of the manipulations you apply to those sources (removing columns, renaming columns, merging datasets together) are stored in the M language and are repeatable, so you can refresh locally in Power BI Designer as much as you want inside the application (that might change when the full version is released). You can build reports on the models that you create, and there is currently no limit to the number of reports you can build. Once you have everything set up the way you want, it can be saved to your local file system as a PBIX file, and that file can be shared with others and viewed/updated in Power BI Designer as long as they also have the required access to the data sources you set up. As far as we know, all of that is going to be free. It is only when you need to push the solution up to the site, or need scheduled data refreshes once reports are deployed to the web, that charges come into play. As far as I know, you can pass around PBIX files to anyone, and as long as they have Power BI Designer and access to those data sources, it is free!

Can the Power BI reports be imported into SharePoint?
Unknown at this time, I don't think that Microsoft is going to forget about allowing this kind of BI/Analytics in SharePoint. I would not be surprised if by the full release they have a way to deploy these PBIX solutions into SharePoint in some way. Or a 3rd party vendor will find a way to parse the XML in the PBIX file and make that happen.

How is it different from SSIS?
Power BI Designer does not have all of the functionality that SSIS has, by design. Power BI Designer is meant for more of a Business Analyst/Power User role. I expect that over time more and more functionality will be added to Power BI Designer, to the point that it can do 90+% of what SSIS does, so that only the really complex ETL needs to be passed on to IT to do in SSIS.

Can we connect to SAP DATA and other data providers?
Microsoft is constantly adding new built-in connectors for different databases, so it is very possible. If a data source can be reached via ODBC, the built-in ODBC provider can be used as well.

Can you use multiple data sources in the same project?
Definitely! One of the major benefits of this tool is that you can pull data in from as many different sources as you want and then join them together as required. As time goes on, it should get better at auto-detecting the joins between data sources, like Power Query can do today.

Can you import web data and then after making the changes upload to your own SQL Server?
There is no out-of-the-box way to do that type of export in Power BI Designer today. But if you used the current Power Query Excel add-on, you could make the necessary updates to the data from multiple sources and then either import the resulting data via Excel/CSV exports or upload the model you created as an SSAS Tabular model. At this point we don't know whether there will be the ability to create the model in Power BI Designer and upload it into SSAS Tabular, but I would not be surprised to see that in the full release.

We are looking at using this for some very confidential data. How can I use this internally and not push anything out to the world?
If you want to share the PBIX files from Power BI Designer, that will work for moving the data/reports around internally. There is no requirement to use the website and that is where the costs come into play as well. Not sure about future plans to integrate with SharePoint, but I would assume that is coming as well.

Thanks again to those that attended this webinar, I will continue to update this post when videos are posted. If you want to replay the entire webinar, just go to this link on the Pragmatic Works site. Thanks also to Pragmatic Works for letting me do this webinar!
Monday, May 04, 2015 10:36:52 AM
Of course I realized after I posted the announcement about speaking at SQLSaturday Atlanta in a couple of weeks that I will also be speaking at the Denver WIT (Women in Technology) meeting on the Wednesday before that, 5/13! The Denver WIT group meets on the 2nd Wednesday of every month at the Denver Microsoft offices and I was more than happy to help out by presenting my new "Power BI is Awesome!" session at this month's meeting.

If you are looking for a good intro into what Power BI is and what it soon will be then this is the session for you! I'll do a little bit of background on what Microsoft has been doing with Power BI over the last few years and where it is all going with the new Power BI Designer and website. There will be demos of the new stuff and some fun guessing where Microsoft might be heading in the future with this product line.

Please RSVP at the EventBrite page and I hope to see you all at the meeting on 5/13!
Monday, May 04, 2015 10:19:18 AM
I'm very happy that I was selected for the 2nd year in a row to speak at the SQLSaturday event in Atlanta on 5/16! This is one of the biggest, if not the biggest, SQLSaturdays in the world, with over 500 attendees! If you are going to be in the Atlanta area on 5/16, please register and plan on attending my session, "Near Real-Time Data Warehouse Updates with CDC & SSIS" at 2:30pm in BI Room 3 (keep your eyes on the schedule page or use Guidebook in case the schedule changes).

Also, if you are going to be around on the Friday before, consider registering for one of the all day pre-cons. These pre-cons are just like the ones that you would get at the large conferences like PASS Summit, but at 1/3rd the price!

Looking forward to meeting and seeing everyone in Atlanta in less than 2 weeks!
Thursday, April 30, 2015 4:00:59 AM
SQL Saturday Atlanta is coming up in two weeks and I’ll be giving my Introduction to Powershell for SQL Server DBAs precon the day before the event. I’m extremely excited to offer this precon and get more SQL Server professionals exposed to Powershell. Think you might be interested? Here’s the synopsis: Maybe you’ve only heard […]
Tuesday, April 21, 2015 3:00:09 AM
My current gig has me working a lot with Availability Groups. I love the options they give me with their ability to failover fast and replicate my data to other servers. It is a lot of peace of mind for high availability and data protection. The price of all this comes from a much more […]