Monday, November 30, 2020

Podcast.__init__: Open Sourcing The Anvil Full Stack Python Web App Platform - Episode 291

Building a complete web application requires expertise in a wide range of disciplines. As a result it is often the work of a whole team of engineers to get a new project from idea to production. Meredydd Luff and his co-founder built the Anvil platform to make it possible to build full stack applications entirely in Python. In this episode he explains why they released the application server as open source, how you can use it to run your own projects for free, and why developer tooling is the sweet spot for an open source business model. He also shares his vision for how the end-to-end experience of building for the web should look, and some of the innovative projects and companies that were made possible by the reduced friction that the Anvil platform provides. Give it a listen today to gain some perspective on what it could be like to build a web app.


Announcements

  • Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great.
  • When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With the launch of their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
  • Do you want to get better at Python? Now is an excellent time to take an online course. Whether you’re just learning Python or you’re looking for deep dives on topics like APIs, memory management, async and await, and more, our friends at Talk Python Training have a top-notch course for you. If you’re just getting started, be sure to check out the Python for Absolute Beginners course. It’s like the first year of computer science that you never took compressed into 10 fun hours of Python coding and problem solving. Go to pythonpodcast.com/talkpython today and get 10% off the course that will help you find your next level. That’s pythonpodcast.com/talkpython, and don’t forget to thank them for supporting the show.
  • Python has become the default language for working with data, whether as a data scientist, data engineer, data analyst, or machine learning engineer. Springboard has launched their School of Data to help you get a career in the field through a comprehensive set of programs that are 100% online and tailored to fit your busy schedule. With a network of expert mentors who are available to coach you during weekly 1:1 video calls, a tuition-back guarantee that means you don’t pay until you get a job, resume preparation, and interview assistance there’s no reason to wait. Springboard is offering up to 20 scholarships of $500 towards the tuition cost, exclusively to listeners of this show. Go to pythonpodcast.com/springboard today to learn more and give your career a boost to the next level.
  • Your host as usual is Tobias Macey and today I’m interviewing Meredydd Luff about the process and motivations for releasing the Anvil platform as open source

Interview

  • Introductions
  • How did you get introduced to Python?
  • Can you start by giving an overview of what Anvil is and some of the story behind it?
    • What is new or different in Anvil since we last spoke in June of 2019?
  • What are the most common or most impressive use cases for Anvil that you have seen?
    • On your website you mention Anvil being used for deploying models and productionizing notebooks. How does Anvil help in those use cases?
  • How much of the adoption of Anvil do you attribute to the use of Skulpt and providing a way to write Python for the browser?
    • What are some of the complications that users might run into when trying to integrate with the broader Javascript ecosystem?
  • How does the release of the Anvil App Server affect your business model?
    • How does the workflow for users of the Anvil platform change if they decide to run their own instance?
    • What is involved in getting it deployed to production?
  • What other tools or companies did you look to for positive and negative examples of how to run a successful business based on open source?
  • What was your motivation for open sourcing the core runtime of Anvil?
    • What was involved in getting the code cleaned up and ready for a public release?
  • What are the other ways that your business relies on or contributes to the open source ecosystem?
  • What do you see as the primary threats to open source business models?
  • What are some of the most interesting, unexpected, or challenging lessons that you have learned while building and growing Anvil?
  • What do you have planned for the future of the platform and business?

Keep In Touch

Picks

Closing Announcements

  • Thank you for listening! Don’t forget to check out our other show, the Data Engineering Podcast for the latest on modern data management.
  • Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
  • If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@podcastinit.com with your story.
  • To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
  • Join the community in the new Zulip chat workspace at pythonpodcast.com/chat

Links

The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA



from Planet Python

Stefan Scherfke: Typed Settings

There are already several settings libraries like Dynaconf, Environ Config, or Pydantic – just to name a few. I have written a new one: Typed Settings.

What makes it different?

Settings are defined as type-hinted, immutable (frozen) attrs classes. Values are automatically converted to the proper type when they are loaded. Apart from simple data types, Typed Settings supports datetimes, enums, nested attrs classes and various container types (like lists). The auto-converter can be extended to handle additional types.

Settings can be loaded from multiple config files. Config files can contain settings for multiple apps (like pyproject.toml). Different deployment environments use different config files (this is in contrast to Dynaconf, where a single config file specifies the settings for all environments in different sections). Currently, only TOML is supported. Support for YAML or .env may follow later.
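As a sketch, a single TOML file shared by two apps (the file name and the section names my-tool and other-tool here are made up) could look like this:

```toml
# shared-settings.toml -- hypothetical config file holding settings
# for two different apps, each under its own section
[my-tool]
option_one = "spam"

[other-tool]
option_one = "eggs"
```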

Value interpolation as in Dynaconf is not yet supported, but planned.

Search paths for config files have to be explicitly stated – either directly in the app or via an environment variable.

Environment variables can also be used to override settings. The prefix is customizable and this feature can also be disabled.

Finally, Typed Settings can generate Click options for command line applications. Your CLI function will receive all options nicely packed together as a single instance of your settings class.

Specialized secrets stores like HashiCorp Vault are not (yet?) supported.

Invalid values or undefined options in config files raise an error instead of being silently ignored. Config files can optionally be marked as mandatory and an error will be raised if such a file cannot be found.

To aid with debugging, Typed Settings uses Python’s logging module to log config files that are being loaded (or that cannot be found) as well as looked up env vars.

Everything is thoroughly tested, the test coverage is at 100%.

An Example

Here is a very simple example that demonstrates how you can load settings from a statically defined config file and from environment variables:

# example.py
import typed_settings as ts

@ts.settings
class Settings:
    option_one: str
    option_two: int

settings = ts.load_settings(
    cls=Settings,
    appname="example",
    config_files=["settings.toml"],  # Paths can also be set via env var
)
print(settings)

# settings.toml
[example]
option_one = "value"

$ EXAMPLE_OPTION_TWO=2 python example.py
Settings(option_one="value", option_two=2)

The README and documentation contain more examples.

Project Status

The recently released version 0.9 contains all features that are planned for version 1.0.0. Additional features are already on the roadmap.

What’s mainly missing for the first stable release is documentation, as well as more real-life testing. I already use Typed Settings for a few projects in our company and plan to eventually replace our old settings system with it.




Ian Ozsvald: Skinny Pandas Riding on a Rocket at PyDataGlobal 2020

On November 11th we saw the most ambitious PyData conference ever – PyData Global 2020 was a combination of world-wide PyData groups putting on a huge event, both to build our international community and to make the most of the online-only conferences that we need to run during Covid-19.

The conference brought together almost 2,000 attendees over 5 days on a 5-track schedule. All speaker videos had to be uploaded in advance so they could be checked and then provided ahead-of-time to attendees. You can see the full program here, the topic list was very solid since the selection committee had the best of the international community uploading their proposals.

The volunteer organising committee felt that giving attendees a chance to watch all the speakers at their leisure took away constraints of time zones – but we wanted to avoid the common end result of “watching a webinar” that has plagued many other conferences this year. Our solution included timed (and repeated) “watch parties” so you could gather to watch the video simultaneously with others, and then share discussion in chat rooms. The volunteer organising committee also worked hard to build a “virtual 2D world” with Gather.town – you walk around a virtual conference space (including the speakers’ rooms, an expo hall, parks, a bar, a helpdesk and more). Volunteer Jesper Dramsch made a very cool virtual tour of “how you can attend PyData Global” which has a great demo of how Gather works – it is worth a quick watch. Other conferences should take note.

Through Gather you could “attend” the keynote and speaker rooms during a watch-party and actually see other attendees around you, you could talk to them and you could watch the video being played. You genuinely got a sense that you were attending an event with others, that’s the first time I’ve really felt that in 2020 and I’ve presented at 7 events this year prior to PyDataGlobal (and frankly some of those other events felt pretty lonely – presenting to a blank screen and getting no feedback…that’s not very fulfilling!).

I spoke on “Skinny Pandas Riding on a Rocket” – a culmination of ideas covered in earlier talks with a focus on getting more into Pandas so you don’t have to learn new technologies and see Vaex, Dask and SQLite in action if you do need to scale up your Pythonic data science.

I also organised another “Executives at PyData” session aimed at getting decision makers and team leaders into a (virtual) room for an hour to discuss pressing issues. Given 6 iterations of my “Successful Data Science Projects” training course in London over the last 1.5 years I know of many issues that repeatedly come up that plague decision makers on data science teams. We got to cover a set of issues and talk on solutions that are known to work. I have a fuller write-up to follow.

The conference also enabled a “pay what you can” model for those attending outside of a corporate ticket; this brought in a much wider audience than could normally attend a PyData conference. The goal of the non-profit NumFOCUS (who back the PyData global events) is to fund open source, so the goal is always to raise more money and to provide a high quality educational and networking experience. For this on-line global event we figured it made sense to open out the community to even more folk – the “pay what you can” model is regarded as a success (this is the first time we’ve done it!) and has given us some interesting attendee insights to think on.

I extend my thanks to the wider volunteer organising committee and to NumFOCUS for making this happen!


Ian is a Chief Interim Data Scientist via his Mor Consulting. Sign-up for Data Science tutorials in London and to hear about his data science thoughts and jobs. He lives in London, is walked by his high energy Springer Spaniel and is a consumer of fine coffees.

The post Skinny Pandas Riding on a Rocket at PyDataGlobal 2020 appeared first on Entrepreneurial Geekiness.




Python Morsels: Keyword-Only Function Arguments

Transcript:

Let's define a function that accepts a keyword-only argument.

Accepting Multiple Positional Arguments

This greet function accepts any number of positional arguments:

>>> def greet(*names):
...     for name in names:
...         print("Hello", name)
...

If we give it some names, it's going to print out Hello, and then the name, for each of those names:

>>> greet("Trey", "Jo", "Ian")
Hello Trey
Hello Jo
Hello Ian

It does this through the * operator, which captures all the positional arguments given to this function.

Positional and Keyword-Only Arguments

If we wanted to allow the greeting (Hello) to be customized we could accept a greeting argument:

>>> def greet(*names, greeting):
...     for name in names:
...         print(greeting, name)
...

We might try to call this new greet function like this:

>>> greet("Trey", "Hi")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: greet() missing 1 required keyword-only argument: 'greeting'

But that gives us an error. The error says that greet is missing one required keyword-only argument greeting.

That error says that greeting is a required argument because it doesn't have a default value, and that it must be specified as a keyword argument when we call this function.

So if we want to customize greeting, we can pass it in as a keyword argument:

>>> greet("Trey", greeting="Hi")
Hi Trey
>>> greet("Trey", greeting="Hello")
Hello Trey

We probably want greeting to actually have a default value of Hello. We can do that by specifying a default value for the greeting argument:

>>> def greet(*names, greeting="Hello"):
...     for name in names:
...         print(greeting, name)
...
>>> greet("Trey", "Jo")
Hello Trey
Hello Jo

Because greeting is after that *names in our function definition, Python sees greeting as a keyword-only argument: an argument that can only be provided as a keyword argument when this function is called.

It can only be given by its name like this:

>>> greet("Trey", "Jo", greeting="Hi")
Hi Trey
Hi Jo

Keyword-Only Arguments in Built-in Functions

This is actually something you'll see in some of Python's built-in functions. For example, the print function accepts any number of positional arguments, as well as four optional keyword-only arguments: sep, end, file, and flush:

>>> help(print)
Help on built-in function print in module builtins:

print(...)
    print(value, ..., sep=' ', end='\n', file=sys.stdout, flush=False)

Note that the documentation for print doesn't use the * syntax, but that ... is print's way of indicating that it accepts any number of values and then all of the arguments after that must be keyword arguments.

If we look at the documentation for greet, you'll see how keyword-only arguments usually show up in documentation:

>>> help(greet)
Help on function greet in module __main__:

greet(*names, greeting='Hello')

Everything after that * (greeting in this case), can only be specified as a keyword argument.

Keyword-Only Arguments without Capturing All Positional Arguments

It is also possible to make a function that doesn't capture any number of positional arguments, but does have some keyword-only arguments. The syntax for this is really weird.

Let's make a multiply function that accepts x and y arguments:

>>> def multiply(*, x, y):
...     return x * y
...

That lone * before x and y means that they must be specified as keyword arguments.

So, if we were to try to call multiply with two positional arguments, we'll get an error:

>>> multiply(1, 2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: multiply() takes 0 positional arguments but 2 were given

To call this function, we have to specify x and y as keyword arguments:

>>> multiply(x=1, y=2)
2

If we call this function with nothing you'll see an error message similar to what we saw before about required keyword-only arguments:

>>> multiply()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: multiply() missing 2 required keyword-only arguments: 'x' and 'y'

Keyword-Only Arguments in the Standard Library

You'll actually sometimes see this * thing on its own within the Python standard library. For example, the chown function in the os module (used for changing the ownership of a file) uses a lone * to specify keyword-only arguments:

chown(path, uid, gid, *, dir_fd=None, follow_symlinks=True)
    Change the owner and group id of path to the numeric uid and gid.

The chown function documentation shows path, uid, gid, and then a * (which isn't an argument itself), and then dir_fd and follow_symlinks. That lone * is a way of noting that everything after that point is a keyword-only argument.

The last two arguments, dir_fd and follow_symlinks can only be specified by their name when the chown function is called.

Summary

So, whenever you see a function that uses * to capture any number of positional arguments (e.g. *args in the function definition), note that any arguments defined after that * can only be specified as keyword arguments (they're keyword-only arguments).

Also, if you see a function that has a * on its own with a comma after it, that means every argument after that point is a keyword-only argument: it must be specified by its name when that function is called.
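Both patterns from the summary above can be sketched side by side (the function names here are just for illustration):

```python
def tag(*words, sep=", "):
    # sep comes after *words, so it's keyword-only
    return sep.join(words)

def scale(*, factor, offset=0):
    # A lone * makes every following argument keyword-only
    return factor * 3 + offset

print(tag("a", "b", sep="-"))       # a-b
print(scale(factor=2, offset=1))    # 7
```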




Stack Abuse: Matplotlib Bar Plot - Tutorial and Examples

Introduction

Matplotlib is one of the most widely used data visualization libraries in Python. From simple to complex visualizations, it's the go-to library for most.

In this tutorial, we'll take a look at how to plot a bar plot in Matplotlib.

Bar graphs display numerical quantities on one axis and categorical variables on the other, letting you see how many occurrences there are for the different categories.

Bar charts can be used for visualizing a time series, as well as just categorical data.

Plot a Bar Plot in Matplotlib

Plotting a Bar Plot in Matplotlib is as easy as calling the bar() function on the PyPlot instance, and passing in the categorical and continuous variables that we'd like to visualize.

import matplotlib.pyplot as plt

x = ['A', 'B', 'C']
y = [1, 5, 3]

plt.bar(x, y)
plt.show()

Here, we've got a few categorical variables in a list - A, B and C. We've also got a couple of continuous variables in another list - 1, 5 and 3. The relationship between these two is then visualized in a Bar Plot by passing these two lists to plt.bar().

This results in a clean and simple bar graph:

basic bar plot in matplotlib

Plot a Horizontal Bar Plot in Matplotlib

Oftentimes, we might want to plot a Bar Plot horizontally, instead of vertically. This is easily achievable by switching the plt.bar() call with the plt.barh() call:

import matplotlib.pyplot as plt

x = ['A', 'B', 'C']
y = [1, 5, 3]

plt.barh(x, y)
plt.show()

This results in a horizontally-oriented Bar Plot:

horizontal bar plot in matplotlib

Change Bar Plot Color in Matplotlib

Changing the color of the bars themselves is as easy as setting the color argument with a list of colors. If you have more bars than colors in the list, they'll start being applied from the first color again:

import matplotlib.pyplot as plt

x = ['A', 'B', 'C']
y = [1, 5, 3]

plt.bar(x, y, color=['red', 'blue', 'green'])
plt.show()

Now, we've got a nicely colored Bar Plot:

change bar plot color in matplotlib

Of course, you can also use the shorthand versions or even HTML codes:

plt.bar(x, y, color=['red', 'blue', 'green'])
plt.bar(x, y, color=['r', 'b', 'g'])
plt.bar(x, y, color=['#ff0000', '#00ff00', '#0000ff'])
plt.show()

Or you can even put a single scalar value, to apply it to all bars:

plt.bar(x, y, color='green')

change bar plot color in matplotlib

Bar Plot with Error Bars in Matplotlib

When you're plotting mean values of lists, which is a common application for Bar Plots, you'll have some error space. It's very useful to plot error bars to let other observers, and yourself, know how reliable these means are and what deviation is expected.

For this, let's make a dataset with some values, calculate their means and standard deviations with Numpy and plot them with error bars:

import matplotlib.pyplot as plt
import numpy as np

x = np.array([4, 5, 6, 3, 6, 5, 7, 3, 4, 5])
y = np.array([3, 4, 1, 3, 2, 3, 3, 1, 2, 3])
z = np.array([6, 9, 8, 7, 9, 8, 9, 6, 8, 7])

x_mean = np.mean(x)
y_mean = np.mean(y)
z_mean = np.mean(z)

x_deviation = np.std(x)
y_deviation = np.std(y)
z_deviation = np.std(z)

bars = [x_mean, y_mean, z_mean]
bar_categories = ['X', 'Y', 'Z']
error_bars = [x_deviation, y_deviation, z_deviation]

plt.bar(bar_categories, bars, yerr=error_bars)
plt.show()

Here, we've created three fake datasets with several values each. We'll visualize the mean values of each of these lists. However, since means can give a false sense of accuracy, we'll also calculate the standard deviation of these datasets so that we can add those as error bars.

Using Numpy's mean() and std() functions, this is a breeze. Then, we've packed the bar values into a bars list, the bar names for a nice user experience into bar_categories and finally - the standard deviation values into an error_bars list.

To visualize this, we call the regular bar() function, passing in the bar_categories (categorical values) and bars (continuous values), alongside the yerr argument.

Since we're plotting vertically, we're using the yerr argument. If we were plotting horizontally, we'd use the xerr argument. Here, we've provided the information about the error bars.

This ultimately results in:

bar plot with error bars in matplotlib

Plot Stacked Bar Plot in Matplotlib

Finally, let's plot a Stacked Bar Plot. Stacked Bar Plots are really useful if you have groups of variables, but instead of plotting them one next to the other, you'd like to plot them one on top of the other.

For this, we'll again have groups of data. Then, we'll calculate their standard deviation for error bars.

Finally, we'll need an index range to plot these variables on top of each other, while maintaining their relative order. This index will essentially be a range of numbers the length of all the groups we've got.

To stack a bar on another one, you use the bottom argument. You specify what's on the bottom of that bar. To plot x beneath y, you'd set x as the bottom of y.

For more than one group, you'll want to add the values together before plotting, otherwise, the Bar Plot won't add up. We'll use Numpy's np.add().tolist() to add the elements of two lists and produce a list back:

import matplotlib.pyplot as plt
import numpy as np

# Groups of data, first values are plotted on top of each other
# Second values are plotted on top of each other, etc
x = [1, 3, 2]
y = [2, 3, 3]
z = [7, 6, 8]

# Standard deviation rates for error bars
x_deviation = np.std(x)
y_deviation = np.std(y)
z_deviation = np.std(z)

bars = [x, y, z]
ind = np.arange(len(bars))
bar_categories = ['X', 'Y', 'Z']
bar_width = 0.5
bar_padding = np.add(x, y).tolist()


plt.bar(ind, x, yerr=x_deviation, width=bar_width)
plt.bar(ind, y, yerr=y_deviation, bottom=x, width=bar_width)
plt.bar(ind, z, yerr=z_deviation, bottom=bar_padding, width=bar_width)

plt.xticks(ind, bar_categories)
plt.xlabel("Stacked Bar Plot")

plt.show()

Running this code results in:

stacked bar plot in matplotlib

Conclusion

In this tutorial, we've gone over several ways to plot a bar plot using Matplotlib and Python. We've also covered how to calculate and add error bars, as well as stack bars on top of each other.

If you're interested in Data Visualization and don't know where to start, make sure to check out our book on Data Visualization in Python.

Data Visualization in Python, a book for beginner to intermediate Python developers, will guide you through simple data manipulation with Pandas, cover core plotting libraries like Matplotlib and Seaborn, and show you how to take advantage of declarative and experimental libraries like Altair.




Real Python: np.linspace(): Create Evenly or Non-Evenly Spaced Arrays

When you’re working with numerical applications using NumPy, you often need to create an array of numbers. In many cases you want the numbers to be evenly spaced, but there are also times when you may need non-evenly spaced numbers. One of the key tools you can use in both situations is np.linspace().

In its basic form, np.linspace() can seem relatively straightforward to use. However, it’s an essential part of the numerical programming toolkit. It’s both very versatile and powerful. In this tutorial, you’ll find out how to use this function effectively.

In this tutorial, you’ll learn how to:

  • Create an evenly or non-evenly spaced range of numbers
  • Decide when to use np.linspace() instead of alternative tools
  • Use the required and optional input parameters
  • Create arrays with two or more dimensions
  • Represent mathematical functions in discrete form

This tutorial assumes you’re already familiar with the basics of NumPy and the ndarray data type. You’ll start by learning about various ways of creating a range of numbers in Python. Then you’ll take a closer look at all the ways of using np.linspace() and how you can use it effectively in your programs.

Free Bonus: Click here to get access to a free NumPy Resources Guide that points you to the best tutorials, videos, and books for improving your NumPy skills.

Creating Ranges of Numbers With Even Spacing

There are several ways in which you can create a range of evenly spaced numbers in Python. np.linspace() allows you to do this and to customize the range to fit your specific needs, but it’s not the only way to create a range of numbers. In the next section, you’ll learn how to use np.linspace() before comparing it with other ways of creating ranges of evenly spaced numbers.

Using np.linspace()

np.linspace() has two required parameters, start and stop, which you can use to set the beginning and end of the range:

>>>
>>> import numpy as np
>>> np.linspace(1, 10)
array([ 1.        ,  1.18367347,  1.36734694,  1.55102041,  1.73469388,
        1.91836735,  2.10204082,  2.28571429,  2.46938776,  2.65306122,
        2.83673469,  3.02040816,  3.20408163,  3.3877551 ,  3.57142857,
        3.75510204,  3.93877551,  4.12244898,  4.30612245,  4.48979592,
        4.67346939,  4.85714286,  5.04081633,  5.2244898 ,  5.40816327,
        5.59183673,  5.7755102 ,  5.95918367,  6.14285714,  6.32653061,
        6.51020408,  6.69387755,  6.87755102,  7.06122449,  7.24489796,
        7.42857143,  7.6122449 ,  7.79591837,  7.97959184,  8.16326531,
        8.34693878,  8.53061224,  8.71428571,  8.89795918,  9.08163265,
        9.26530612,  9.44897959,  9.63265306,  9.81632653, 10.        ])

This code returns an ndarray with equally spaced intervals between the start and stop values. This is a vector space, also called a linear space, which is where the name linspace comes from.

Note that the value 10 is included in the output array. The function returns a closed range, one that includes the endpoint, by default. This is contrary to what you might expect from Python, in which the end of a range usually isn’t included. This break with convention isn’t an oversight. You’ll see later on that this is usually what you want when using this function.
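If you do want the half-open behavior you'd expect from a Python range, np.linspace() also accepts an endpoint parameter; a quick sketch:

```python
import numpy as np

# By default the stop value is included (a closed range)
closed = np.linspace(0, 10, num=5)
print(closed)  # values: 0., 2.5, 5., 7.5, 10.

# With endpoint=False the stop value is excluded, like range()
half_open = np.linspace(0, 10, num=5, endpoint=False)
print(half_open)  # values: 0., 2., 4., 6., 8.
```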

The array in the example above is of length 50, which is the default number. In most cases, you’ll want to set your own number of values in the array. You can do so with the optional parameter num:

>>>
>>> np.linspace(1, 10, num=10)
array([ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9., 10.])

The output array in this instance contains 10 equally spaced values between 1 and 10, which is just the numbers from 1 to 10. Here’s another example:

>>>
>>> np.linspace(-10, 10, 25)
array([-10.        ,  -9.16666667,  -8.33333333,  -7.5       ,
        -6.66666667,  -5.83333333,  -5.        ,  -4.16666667,
        -3.33333333,  -2.5       ,  -1.66666667,  -0.83333333,
         0.        ,   0.83333333,   1.66666667,   2.5       ,
         3.33333333,   4.16666667,   5.        ,   5.83333333,
         6.66666667,   7.5       ,   8.33333333,   9.16666667,
        10.        ])

In the example above, you create a linear space with 25 values between -10 and 10. You use the num parameter as a positional argument, without explicitly mentioning its name in the function call. This is the form you’re likely to use most often.

Using range() and List Comprehensions

Let’s take a step back and look at what other tools you could use to create an evenly spaced range of numbers. The most straightforward option that Python offers is the built-in range(). The function call range(10) returns an object that produces the sequence from 0 to 9, which is an evenly spaced range of numbers.

For many numerical applications, the fact that range() is limited to integers is too restrictive. Of the examples shown above, only np.linspace(1, 10, 10) can be accomplished with range():

>>>
>>> list(range(1, 11))
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

The values returned by range(), when converted explicitly into a list, are the same as those returned by the NumPy version, except that they’re integers instead of floats.

You can still use range() with list comprehensions to create non-integer ranges:

>>>
>>> step = 20 / 24  # Divide the range into 24 intervals
>>> [-10 + step*interval for interval in range(25)]
[-10.0, -9.166666666666666, -8.333333333333334, -7.5,
 -6.666666666666666, -5.833333333333333, -5.0, -4.166666666666666,
 -3.333333333333333, -2.5, -1.666666666666666, -0.8333333333333321,
 0.0, 0.8333333333333339, 1.6666666666666679, 2.5,
 3.333333333333334, 4.166666666666668, 5.0, 5.833333333333334,
 6.666666666666668, 7.5, 8.333333333333336, 9.166666666666668, 10.0]

Read the full article at https://realpython.com/np-linspace-numpy/ »





from Planet Python
via read more

Matthew Wright: Removing duplicate data in Pandas

It can be very common when dealing with time series data to end up with duplicate data. This can happen for a variety of reasons, and I've encountered it more than once and have tried different approaches to eliminate the duplicate values. There's a gem of a solution on Stack Overflow and I thought … Continue reading Removing duplicate data in Pandas
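As a sketch of one common approach (not necessarily the exact solution the post refers to), duplicated index entries in a time series can be dropped by filtering on Index.duplicated():

```python
import pandas as pd

# A hypothetical time series with a duplicated timestamp
s = pd.Series(
    [1.0, 2.0, 2.0, 3.0],
    index=pd.to_datetime(
        ["2020-01-01", "2020-01-02", "2020-01-02", "2020-01-03"]
    ),
)

# Keep only the first row for each duplicated index value
deduped = s[~s.index.duplicated(keep="first")]
```

Passing keep="last" instead would retain the most recent of the duplicated rows.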



from Planet Python
via read more

Python Pool: Matplotlib Table in Python With Examples

Hello programmers, today we will learn about the implementation of Matplotlib tables in Python. The matplotlib.pyplot.table() method is used to create or add a table to axes in Python programs. Such a table is often used as an extension to a stacked bar chart. Before we move on with various examples and formatting of tables, let me just brief you about the syntax and return type of the Matplotlib table function.

Syntax of Matplotlib Table:

matplotlib.pyplot.table(cellText=None, cellColours=None, cellLoc='right', colWidths=None, rowLabels=None, rowColours=None, rowLoc='left', colLabels=None, colColours=None, colLoc='center', loc='bottom', bbox=None, edges='closed', **kwargs)

Specifying one of cellText or cellColours as a parameter to the matplotlib table function is mandatory. These parameters must be 2D lists, in which the outer lists define the rows and the inner lists define the column values of each row.

The table can optionally have row and column headers. These optional parameters are configured using rowLabels, rowColours, rowLoc, and colLabels, colColours, colLoc respectively. **kwargs are optional parameters as well.

Parameters:

  • cellText: The texts to place into the table cells.
  • cellColours: The background colors of the cells.
  • cellLoc: The alignment of the text within the cells. (default: ‘right’)
  • colWidths: The column widths in units of the axes.
  • rowLabels: The text of the row header cells.
  • rowColours: The colors of the row header cells.
  • rowLoc: The text alignment of the row header cells. (default: ‘left’)
  • colLabels: The text of the column header cells.
  • colColours: The colors of the column header cells.
  • colLoc: The text alignment of the column header cells. (default: ‘center’)
  • loc: This parameter is the position of the table with respect to ax.
  • bbox: This parameter is the bounding box to draw the table into.
  • edges: This parameter is the cell edges to be drawn with a line.
  • **kwargs: Used to control table properties.

Return type :

The matplotlib.pyplot.table() method returns the table created from the given parameters.

Implementation of Matplotlib table

import matplotlib.pyplot as plt

# Hexadecimal labels for the columns and rows, and empty cell text
val1 = ["{:X}".format(i) for i in range(10)]
val2 = ["{:02X}".format(10 * i) for i in range(10)]
val3 = [["" for c in range(10)] for r in range(10)]

fig, ax = plt.subplots()
ax.set_axis_off()
table = ax.table(
    cellText=val3,
    rowLabels=val2,
    colLabels=val1,
    rowColours=["palegreen"] * 10,
    colColours=["palegreen"] * 10,
    cellLoc='center',
    loc='upper left')

ax.set_title('matplotlib.axes.Axes.table() function Example',
             fontweight="bold")

plt.show()

Output:

Implementation of Matplotlib table

Explanation:

In the above example, the desired parameters are passed as arguments to matplotlib.pyplot.table() to create the table: cellText=val3, rowLabels=val2, colLabels=val1, rowColours=["palegreen"] * 10, colColours=["palegreen"] * 10, cellLoc='center', loc='upper left'. The list comprehensions for val1, val2, and val3 generate the column labels, row labels, and cell text, respectively. Setting rowColours and colColours to 'palegreen' colors the row and column header cells. cellLoc='center' centers the text within each cell, and loc='upper left' positions the table in the upper left of the axes.

Style Row and Column Headers

import numpy as np
import matplotlib.pyplot as plt

# Sample headers and empty cell text to make the snippet self-contained
row_headers = ['Row %d' % i for i in range(4)]
column_headers = ['Col %d' % i for i in range(3)]
cell_text = [['' for _ in column_headers] for _ in row_headers]

# One color per header cell, taken from the BuPu color map
rcolors = plt.cm.BuPu(np.full(len(row_headers), 0.1))
ccolors = plt.cm.BuPu(np.full(len(column_headers), 0.1))

the_table = plt.table(cellText=cell_text,
                      rowLabels=row_headers,
                      rowColours=rcolors,
                      rowLoc='right',
                      colColours=ccolors,
                      colLabels=column_headers,
                      loc='center')

Output:

Style Row and Column Headers

Explanation:

The above code snippet styles the row and column headers. We use the plt.cm.BuPu color map to fill two lists of colors, one for every row header cell and another for every column header cell, and set the row header horizontal alignment to right. For decorative colors, though, there are better choices than a linear color map.

Styling the Matplotlib Table in Python

fig_background_color = 'skyblue'
fig_border = 'steelblue'

# Keep a reference to the figure so we can reuse its colors when saving
fig = plt.figure(linewidth=2,
                 edgecolor=fig_border,
                 facecolor=fig_background_color
                 )

plt.savefig('pyplot-table-figure-style.png',
            bbox_inches='tight',
            edgecolor=fig.get_edgecolor(),
            facecolor=fig.get_facecolor(),
            dpi=150
            )

Output:

Styling the Matplotlib Table in Python

Explanation:

We can explicitly declare the figure to get easy control of its border and background color. In the above code snippet, the background color is set to sky blue. And the figure border is set to steel blue.

Conclusion

This article brings you simple and brief concepts of Matplotlib tables in Python. It includes ways of inserting tables into your Python program in a very neat manner. Methods to style the table and its row and column headers are also explicitly discussed here. Refer to this article for any queries related to tables.

However, if you have any doubts or questions do let me know in the comment section below. I will try to help you as soon as possible.

Happy Pythoning!

The post Matplotlib Table in Python With Examples appeared first on Python Pool.



from Planet Python
via read more

Graph Neural Network and Some of GNN Applications – Everything You Need to Know

The recent success of neural networks has boosted research on pattern recognition and data mining.  Machine learning tasks, like object detection, machine...

The post Graph Neural Network and Some of GNN Applications – Everything You Need to Know appeared first on neptune.ai.



from Planet SciPy
read more

np.linspace(): Create Evenly or Non-Evenly Spaced Arrays

When you’re working with numerical applications using NumPy, you often need to create an array of numbers. In many cases you want the numbers to be evenly spaced, but there are also times when you may need non-evenly spaced numbers. One of the key tools you can use in both situations is np.linspace().

In its basic form, np.linspace() can seem relatively straightforward to use. However, it’s an essential part of the numerical programming toolkit. It’s both very versatile and powerful. In this tutorial, you’ll find out how to use this function effectively.

In this tutorial, you’ll learn how to:

  • Create an evenly or non-evenly spaced range of numbers
  • Decide when to use np.linspace() instead of alternative tools
  • Use the required and optional input parameters
  • Create arrays with two or more dimensions
  • Represent mathematical functions in discrete form

This tutorial assumes you’re already familiar with the basics of NumPy and the ndarray data type. You’ll start by learning about various ways of creating a range of numbers in Python. Then you’ll take a closer look at all the ways of using np.linspace() and how you can use it effectively in your programs.

Creating Ranges of Numbers With Even Spacing

There are several ways in which you can create a range of evenly spaced numbers in Python. np.linspace() allows you to do this and to customize the range to fit your specific needs, but it’s not the only way to create a range of numbers. In the next section, you’ll learn how to use np.linspace() before comparing it with other ways of creating ranges of evenly spaced numbers.

Using np.linspace()

np.linspace() has two required parameters, start and stop, which you can use to set the beginning and end of the range:

>>>
>>> import numpy as np
>>> np.linspace(1, 10)
array([ 1.        ,  1.18367347,  1.36734694,  1.55102041,  1.73469388,
        1.91836735,  2.10204082,  2.28571429,  2.46938776,  2.65306122,
        2.83673469,  3.02040816,  3.20408163,  3.3877551 ,  3.57142857,
        3.75510204,  3.93877551,  4.12244898,  4.30612245,  4.48979592,
        4.67346939,  4.85714286,  5.04081633,  5.2244898 ,  5.40816327,
        5.59183673,  5.7755102 ,  5.95918367,  6.14285714,  6.32653061,
        6.51020408,  6.69387755,  6.87755102,  7.06122449,  7.24489796,
        7.42857143,  7.6122449 ,  7.79591837,  7.97959184,  8.16326531,
        8.34693878,  8.53061224,  8.71428571,  8.89795918,  9.08163265,
        9.26530612,  9.44897959,  9.63265306,  9.81632653, 10.        ])

This code returns an ndarray with equally spaced intervals between the start and stop values. This is a vector space, also called a linear space, which is where the name linspace comes from.

Note that the value 10 is included in the output array. The function returns a closed range, one that includes the endpoint, by default. This is contrary to what you might expect from Python, in which the end of a range usually isn’t included. This break with convention isn’t an oversight. You’ll see later on that this is usually what you want when using this function.
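If you do want the half-open behavior that range() follows, np.linspace() accepts an endpoint parameter; setting it to False excludes the stop value and adjusts the spacing accordingly:

```python
import numpy as np

closed = np.linspace(0, 10, num=5)                     # includes 10.0
half_open = np.linspace(0, 10, num=5, endpoint=False)  # excludes 10.0

print(closed)     # [ 0.   2.5  5.   7.5 10. ]
print(half_open)  # [0. 2. 4. 6. 8.]
```

Note that with endpoint=False the step changes from 2.5 to 2.0, because the same five values now cover only part of the interval.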

The array in the example above is of length 50, which is the default number. In most cases, you’ll want to set your own number of values in the array. You can do so with the optional parameter num:

>>>
>>> np.linspace(1, 10, num=10)
array([ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9., 10.])

The output array in this instance contains 10 equally spaced values between 1 and 10, which is just the numbers from 1 to 10. Here’s another example:

>>>
>>> np.linspace(-10, 10, 25)
array([-10.        ,  -9.16666667,  -8.33333333,  -7.5       ,
        -6.66666667,  -5.83333333,  -5.        ,  -4.16666667,
        -3.33333333,  -2.5       ,  -1.66666667,  -0.83333333,
         0.        ,   0.83333333,   1.66666667,   2.5       ,
         3.33333333,   4.16666667,   5.        ,   5.83333333,
         6.66666667,   7.5       ,   8.33333333,   9.16666667,
        10.        ])

In the example above, you create a linear space with 25 values between -10 and 10. You use the num parameter as a positional argument, without explicitly mentioning its name in the function call. This is the form you’re likely to use most often.

Using range() and List Comprehensions

Let’s take a step back and look at what other tools you could use to create an evenly spaced range of numbers. The most straightforward option that Python offers is the built-in range(). The function call range(10) returns an object that produces the sequence from 0 to 9, which is an evenly spaced range of numbers.

For many numerical applications, the fact that range() is limited to integers is too restrictive. Of the examples shown above, only np.linspace(1, 10, 10) can be accomplished with range():

>>>
>>> list(range(1, 11))
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

The values returned by range(), when converted explicitly into a list, are the same as those returned by the NumPy version, except that they’re integers instead of floats.
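You can confirm this correspondence by comparing the two results element-wise; the values are numerically equal even though one array holds floats and the other list holds integers:

```python
import numpy as np

linspace_values = np.linspace(1, 10, 10)
range_values = list(range(1, 11))

# Element-wise comparison ignores the float/int dtype difference
print(np.array_equal(linspace_values, range_values))  # True
print(linspace_values.dtype)  # float64
```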

You can still use range() with list comprehensions to create non-integer ranges:

>>>
>>> step = 20 / 24  # Divide the range into 24 intervals
>>> [-10 + step*interval for interval in range(25)]
[-10.0, -9.166666666666666, -8.333333333333334, -7.5,
 -6.666666666666666, -5.833333333333333, -5.0, -4.166666666666666,
 -3.333333333333333, -2.5, -1.666666666666666, -0.8333333333333321,
 0.0, 0.8333333333333339, 1.6666666666666679, 2.5,
 3.333333333333334, 4.166666666666668, 5.0, 5.833333333333334,
 6.666666666666668, 7.5, 8.333333333333336, 9.166666666666668, 10.0]

Read the full article at https://realpython.com/np-linspace-numpy/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]



from Real Python
read more

Zato Blog: Service-oriented API task scheduling

An integral part of Zato, its scalable, service-oriented scheduler makes it possible to execute high-level API integration processes as background tasks. The scheduler runs periodic jobs which in turn trigger services, and services are what is used to integrate systems.

Integration process

In this article, we will look at how to use the scheduler with three kinds of jobs: one-time, interval-based, and Cron-style ones.

Sample integration process

What we want to achieve is a sample yet fairly common use-case:

  • Periodically consult a remote REST endpoint for new data
  • Store data found in Redis
  • Push data found as an e-mail attachment

Instead of, or in addition to, Redis or e-mail, we could use SQL and SMS, or MongoDB and AMQP or anything else - Redis and e-mail are just example technologies frequently used in data synchronisation processes that we use to highlight the workings of the scheduler.

No matter the input and output channels, the scheduler works always the same - a definition of a job is created and the job's underlying service is invoked according to the schedule. It is then up to the service to perform all the actions required in a given integration process.

Python code

Our integration service will read as below:

# -*- coding: utf-8 -*-

# Zato
from zato.common.api import SMTPMessage
from zato.server.service import Service

class SyncData(Service):
    name = 'api.scheduler.sync'

    def handle(self):

        # Which REST outgoing connection to use
        rest_out_name = 'My Data Source'

        # Which SMTP connection to send an email through
        smtp_out_name = 'My SMTP'

        # Who the recipient of the email will be
        smtp_to = 'hello@example.com'

        # Who to put on CC
        smtp_cc = 'hello.cc@example.com'

        # Now, let's get the new data from a remote endpoint ..

        # .. get a REST connection by name ..
        rest_conn = self.out.plain_http[rest_out_name].conn

        # .. download newest data ..
        data = rest_conn.get(self.cid).text

        # .. construct a new e-mail message ..
        message = SMTPMessage()
        message.subject = 'New data'
        message.body = 'Check attached data'

        # .. add recipients ..
        message.to = smtp_to
        message.cc = smtp_cc

        # .. attach the new data to the message ..
        message.attach('my.data.txt', data)

        # .. get an SMTP connection by name ..
        smtp_conn = self.email.smtp[smtp_out_name].conn

        # .. send the e-mail message with newest data ..
        smtp_conn.send(message)

        # .. and now store the data in Redis.
        self.kvdb.conn.set('newest.data', data)

Now, we just need to make it run periodically in background.

Mind the timezone

In the next steps, we will use web-admin to configure new jobs for the scheduler.

Keep in mind that any date and time that you enter in web-admin is always interpreted as being in your web-admin user's timezone, and this applies to the scheduler too - by default the timezone is UTC. You can change it by clicking Settings and picking the right timezone to make sure that the scheduled jobs run as expected.

It does not matter what timezone your Zato servers are in - they may be in different ones than the user that is configuring the jobs.

User settings

Endpoint definitions

First, let's use web-admin to define the endpoints that the service uses. Note that Redis does not need an explicit declaration because it is always available under "self.kvdb" in each service.

  • Configuring outgoing REST APIs
Outgoing REST connections menu
Outgoing REST connections form
  • Configuring SMTP e-mail
Outgoing SMTP e-mail connections menu
Outgoing SMTP e-mail connections form

Now, we can move on to the actual scheduler jobs.

Three types of jobs

To cover different integration needs, three types of jobs are available:

  • One-time - fires once only at a specific date and time and then never runs again
  • Interval-based - for periodic processes, can use any combination of weeks, days, hours, minutes and seconds for the interval
  • Cron-style - similar to interval-based but uses the syntax of Cron for its configuration
Creating a new scheduler job

One-time

Select one-time if the job should not be repeated after it runs once.

Creating a new one-time scheduler job

Interval-based

Select interval-based if the job should be repeated periodically. Note that such a job will by default run indefinitely, but you can also specify after how many runs it should stop, letting you express concepts such as "Execute once per hour but only for the next seven days".

Creating a new interval-based scheduler job

Cron-style

Select cron-style if you are already familiar with the syntax of Cron or if you have some Cron tasks that you would like to migrate to Zato.

Creating a new Cron-style scheduler job

Running jobs manually

At times, it is convenient to run a job on demand, no matter what its schedule is and regardless of its type. Web-admin always lets you execute a job directly: simply find the job in the listing, click "Execute", and it will run immediately.

Extra context

It is very often useful to provide additional context data to a service that the scheduler runs - to achieve it, simply enter any arbitrary value in the "Extra" field when creating or editing a job in web-admin.

Afterwards, that information will be available as self.request.raw_request in the service's handle method.
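For instance, if the "Extra" field held semicolon-separated key=value pairs - a convention of our own choosing here, not anything Zato mandates - a small helper could turn the raw request into a dictionary. A minimal sketch:

```python
def parse_extra(extra):
    """Parse a job's "Extra" field, assuming our own convention of
    semicolon-separated key=value pairs, e.g. 'region=eu;limit=50'."""
    extra = extra or ''
    return dict(item.split('=', 1) for item in extra.split(';') if '=' in item)

# Inside a service's handle method this could be used as:
# config = parse_extra(self.request.raw_request)
```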

Reusability

There is nothing else required - all is done and the service will run in accordance with a job's schedule.

Yet, before concluding, observe that our integration service is completely reusable - there is nothing scheduler-specific in it despite the fact that we currently run it from the scheduler.

We could now invoke the service from the command line. Or we could mount it on a REST, AMQP or WebSocket channel, or trigger it from any other one - exactly the same Python code will run in exactly the same fashion, without any new programming effort needed.



from Planet Python
via read more

Codementor: How Fighting Programming Anxiety Made Me a Better Code: 5 Tips to Follow For Dealing With Coding Stress

As a beginner programmer, you might often hear how fun coding is from your peers. When I talk to senior developers, they share a lot of reasons for why they love what they do so much — the field challenges you constantly, it’s highly stimulating, and the thrill of building new things from scratch and seeing your code is hard to compare to anything else.

from Planet Python
via read more

Python Insider: Releasing pip 20.3, featuring new dependency resolver

On behalf of the Python Packaging Authority and the pip team, I am pleased to announce that we have just released pip 20.3, a new version of pip. You can install it by running python -m pip install --upgrade pip.

This is an important and disruptive release -- we explained why in a blog post last year. We've even made a video about it.

Highlights

  • DISRUPTION: Switch to the new dependency resolver by default. Watch out for changes in handling editable installs, constraints files, and more: https://pip.pypa.io/en/latest/user_guide/#changes-to-the-pip-dependency-resolver-in-20-3-2020

  • DEPRECATION: Deprecate support for Python 3.5 (to be removed in pip 21.0).

  • DEPRECATION: pip freeze will stop filtering the pip, setuptools, distribute and wheel packages from pip freeze output in a future version. To keep the previous behavior, users should use the new --exclude option.

  • Substantial improvements in new resolver for performance, output and error messages, avoiding infinite loops, and support for constraints files.

  • Support for PEP 600: Future manylinux Platform Tags for Portable Linux Built Distributions.

  • Documentation improvements: Resolver migration guide, quickstart guide, and new documentation theme.

  • Add support for MacOS Big Sur compatibility tags.

The new resolver is now on by default. It is significantly stricter and more consistent when it receives incompatible instructions, and reduces support for certain kinds of constraints files, so some workarounds and workflows may break. Please see our guide on how to test and migrate, and how to report issues. You can use the deprecated (old) resolver, using the flag --use-deprecated=legacy-resolver, until we remove it in the pip 21.0 release in January 2021.

You can find more details (including deprecations and removals) in the changelog.

Coming soon: end of Python 2.7 support

We aim to release pip 21.0 in January 2021, per our release cadence. At that time, pip will stop supporting Python 2.7 and will therefore stop supporting Python 2 entirely.

For more info or to contribute:

We run this project as transparently as possible, so you can:

Thank you

Thanks to our contractors on this project: Simply Secure (specifically Georgia Bullen, Bernard Tyers, Nicole Harris, Ngọc Triệu, and Karissa McKelvey), Changeset Consulting (Sumana Harihareswara), Atos (Paul F. Moore), Tzu-ping Chung, Pradyun Gedam, and Ilan Schnell. Thanks also to Ernest W. Durbin III at the Python Software Foundation for liaising with the project.
 
This award continues our relationship with Mozilla, which supported Python packaging tools with a Mozilla Open Source Support Award in 2017 for Warehouse. Thank you, Mozilla! (MOSS has a number of types of awards, which are open to different sorts of open source/free software projects. If your project will seek financial support in 2021, do check the MOSS website to see if you qualify.)

This is new funding from the Chan Zuckerberg Initiative. This project is being made possible in part by a grant from the Chan Zuckerberg Initiative DAF, an advised fund of Silicon Valley Community Foundation. Thank you, CZI! (If your free software/open source project is seeking funding and is used by researchers, check the Joint Roadmap for Open Science Tools Rapid Response Fund and consider applying.)
 
The funding for pip's overhaul will end at the end of 2020; if your organization wants to help continue improvements in Python packaging, please join the sponsorship program.

As with all pip releases, a significant amount of the work was contributed by pip's user community. Huge thanks to all who have contributed, whether through code, documentation, issue reports and/or discussion. Your help keeps pip improving, and is hugely appreciated. Thank you to the pip and PyPA maintainers, to the PSF and the Packaging WG, and to all the contributors and volunteers who work on or use Python packaging tools.
 
-Sumana Harihareswara, pip project manager


from Planet Python
via read more


TestDriven.io: Working with Static and Media Files in Django

This article looks at how to work with static and media files in a Django project, locally and in production.


from Planet Python
via read more