Saturday, November 30, 2019

Tryton News: Newsletter December 2019

@ced wrote:

To end the year, here are some changes that focus on simplifying the usage for both users and developers.
During your holidays, you can help translate Tryton or make a donation via our new service provider.

Contents:

Changes For The User

When the shipment tolerance is exceeded, the error message now shows the quantities involved, so the user understands the reason for the error and can adjust them as required.

The yearly asset depreciation now uses a fixed 365-day year. This avoids odd calculations when leap years are involved.

By default the web client loads 20 records. Previously, the user had to click a button to see more; now additional records are loaded automatically when the user scrolls to the bottom of the list. (This requires a browser that supports IntersectionObserver.)

Tryton now warns you if you cancel a move that groups multiple lines, so you can ungroup the lines before canceling the move.

When importing OFX statements, if the payee cannot be found on the system, we add the payee content to the description. The user can then perform a manual search for the correct party.

We reworked the record names of most line models; they now include the quantity in addition to the product, the order name, and so on.

We improved the experience with product attributes. The selection view has been simplified, and when creating a new attribute, the name of the key is automatically deduced from the “string” (label).

Changes For The Developer

We have removed the implicit list of fields in ModelStorage.search_read. So if you were relying on this behavior, you must now update your calls to explicitly request the fields, otherwise you will only get the ids.
As a by-product of this, we now have a dedicated method that fetches data for actions and is cached by the client for 1 day.
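A hedged sketch of what an updated call might look like (the model name, the Pool access, and the fields_names keyword are assumptions here; check the trytond documentation for the exact signature):

from trytond.pool import Pool

# Illustration only: explicitly request the fields you need.
Party = Pool().get('party.party')
parties = Party.search_read(
    [('active', '=', True)],
    fields_names=['name', 'code'],  # explicit field list
)
# Without an explicit field list, only the record ids are returned.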

We have dropped support for the skiptest attribute for XML <data/>. It was no longer used by any of the standard modules and we think it is always better that the tests actually test the loading of all the data.

When adding instances to a One2Many field, the system automatically sets the reverse Many2One for each instance. This helps when writing code that needs to work with saved and unsaved records.

It is now possible to define wizard transitions that do not require a valid form before they can be clicked. For example, the “Skip” button on the reconcile wizard does not need a valid form.

We refactored the code in the group line wizard. It can now be called from code without needing to instantiate the wizard. This is useful when automating workflows that require grouping lines.

We noticed that calling the getter methods on trytond.config has a non-negligible cost, especially in core methods that are called very often. So we changed some of these into global variables.

Every record/wizard/report has a __url__ attribute which returns the tryton:// URL that the desktop client can open, but we were missing this for the web client. We now also have a __href__ attribute.
In addition to this change, we also added some helpers in trytond.url like is_secure, host and http_host. They make it easier to compose a URL that points to Tryton’s routes.

We added a slugify tool as a helper to convert arbitrary strings into a “normalized” keyword.
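To give an idea of what such a helper does, here is a generic, hedged sketch of slug-style normalization (this is not Tryton’s actual implementation or API):

import re
import unicodedata

def slugify(value, hyphenate='-'):
    # Illustrative only, not the trytond code: strip accents, drop
    # punctuation, and collapse whitespace into a separator.
    value = unicodedata.normalize('NFKD', value)
    value = value.encode('ascii', 'ignore').decode('ascii')
    value = re.sub(r'[^\w\s-]', '', value).strip().lower()
    return re.sub(r'[-\s]+', hyphenate, value)

print(slugify("Crème Brûlée Recipe!"))  # -> creme-brulee-recipe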

Posts: 1

Participants: 1

Read full topic



from Planet Python
via read more

John Cook: Data Science and Star Science

I recently got a review copy of Statistics, Data Mining, and Machine Learning in Astronomy. I’m sure the book is especially useful to astronomers, but those of us who are not astronomers can use it as a survey of data analysis techniques, especially using Python tools, where all the examples happen to come from astronomy. It covers a lot of ground and is pleasant to read.



from Planet Python
via read more

Mike C. Fletcher: PyOpenGL 3.1.4 is Out

So I just went ahead and pulled the trigger on getting PyOpenGL and PyOpenGL Accelerate 3.1.4 out the door. Really, there is little that has changed in PyOpenGL, save that I'm actually doing a final (non alpha/beta/rc) release. The last final release having been about 5.5 years ago if PyPI history is to be believed(!)

Big things of note:

  • Development has moved to github
  • I'm in the process of moving the website to github pages (from sourceforge)
  • Python 3.x seems to be working, and we've got Appveyor .whl builds for Python 2.7, 3.6, 3.7 and 3.8, 32 and 64 bit
  • Appveyor is now running the test-suite on Windows; this doesn't test much, as it's a very old OpenGL, but it does check that there's basic operation on the platform
  • The end result of that should be that new releases can be done without me needing to boot a windows environment, something that has made doing final/formal releases a PITA

Enjoy yourselves!



from Planet Python
via read more

Weekly Python StackOverflow Report: (ccv) stackoverflow python report

These are the ten most rated questions at Stack Overflow last week.
Between brackets: [question score / answers count]
Build date: 2019-11-30 17:26:36 GMT


  1. Unstack and return value counts for each variable? - [11/5]
  2. Pandas: How to create a column that indicates when a value is present in another column a set number of rows in advance? - [7/2]
  3. How to generate all possible combinations with a given condition to make it more efficient? - [6/2]
  4. How do I print values only when they appear more than once in a list in python - [6/2]
  5. Reverse cumsum for countdown functionality in pandas? - [6/1]
  6. How to convert multiple columns to single column? - [5/3]
  7. Checking the type of relationship between columns in python/pandas? (one-to-one, one-to-many, or many-to-many) - [5/3]
  8. Mean Square Displacement as a Function of Time in Python - [5/2]
  9. Map dict lookup - [5/2]
  10. How to hide row of a multiple column based on hided data values - [5/1]


from Planet Python
via read more

Test and Code: 95: Data Science Pipeline Testing with Great Expectations - Abe Gong

Data science and machine learning are affecting more of our lives every day. Decisions based on data science and machine learning are heavily dependent on the quality of the data, and the quality of the data pipeline.

Some of the software in the pipeline can be tested to some extent with traditional testing tools, like pytest.

But what about the data? The data entering the pipeline, and at various stages along the pipeline, should be validated.

That's where pipeline tests come in.

Pipeline tests are applied to data. Pipeline tests help you guard against upstream data changes and monitor data quality.

Abe Gong and Superconductive are building an open source project called Great Expectations. It's a tool to help you build pipeline tests.

This is quite an interesting idea, and I hope it gains traction and takes off.

Special Guest: Abe Gong.

Sponsored By:

  • Raygun: Detect, diagnose, and destroy Python errors that are affecting your customers. With smart Python error monitoring software from Raygun.com, you can be alerted to issues affecting your users the second they happen.

Support Test & Code: Python Software Testing & Engineering

Links:

  • Great Expectations

from Planet Python
via read more

Janusworx: #100DaysOfCode, Day 010 – Quick and Dirty Web Page Download

Decided to take a break from the course, and do something for me.
I want to check a site and download new content if any.

The day went sideways though.
Did not quite do what I wanted.
Watched a video on how to set up Visual Studio Code just the way I wanted.
So not quite all wasted.




from Planet Python
via read more

Friday, November 29, 2019

Trey Hunner: Black Friday Sale: Gift Python Morsels to a Friend

From today until the end of Monday December 2nd, I’m selling bundles of two 52-week Python Morsels redemption codes.

You can buy 12 months of Python Morsels for yourself and gift 12 months of Python Morsels to a friend for free!

Or, if you’re extra generous, you can buy two redemption codes (for the price of one) and gift them both to two friends.

What is Python Morsels?🐍🍪

Python Morsels is a weekly Python skill-building service for professional Python developers. Subscribers receive one Python exercise every week in the Python skill level of their choosing (novice, intermediate, advanced).

Each exercise is designed to help you think the way Python thinks, so you can write your code less like a C/Java/Perl developer would and more like a fluent Pythonista would. Each programming language has its own unique ways of looking at the world: Python Morsels will help you embrace Python’s.

One year’s worth of Python Morsels will help even experienced Python developers deepen their Python skills and find new insights about Python to incorporate into their day-to-day work.

How does this work? 🤔

Normally a 12 month Python Morsels subscription costs $200. For $200, I’m instead selling two redemption codes, each of which can be used for 12 months (52 weeks) of Python Morsels exercises.

With this sale, you’ll get two 12-month redemption codes for the price of one. So you’ll get 1 year of Python Morsels for 2 friends for just $200.

These codes can be used at any time and users of these codes will always maintain access to the 52 exercises received over the 12 month period. You can use one of these codes to extend your current subscription, but new users can also use this redemption code without signing up for an ongoing subscription.

Only one of these codes can be used per account (though you can purchase as many as you’d like to gift to others).

What will I (and my friends) get with Python Morsels? 🎁

With Python Morsels you’ll get:

  • An email every Monday which includes a detailed problem to solve using Python
  • Multiple bonuses for almost every problem (most have 3 bonuses, almost all have 2) so you can re-adjust your difficulty level on a weekly basis
  • Hints for each problem which you can use when you get stuck
  • An online progress tracking tool to keep track of which exercises you’ve solved and how many bonuses you solved for each exercise
  • Automated tests (to ensure correctness) which you can run locally and which also run automatically when you submit your solutions
  • An email every Wednesday with a detailed walkthrough of various solutions (usually 5-10) for each problem, including walkthroughs of each bonus and a discussion of why some solutions may be better than others
  • A skill level selection tool (novice, intermediate, advanced) which you can adjust based on your Python experience
  • A web interface you can come back to even after your 12 months are over

Okay, I’m interested. Now what? ✨

First of all, don’t wait. This buy-one-get-one-free sale ends Monday!

You can sign up and purchase 2 redemption codes by visiting http://trey.io/sale2019

Note that you need to create a Python Morsels account to purchase the redemption codes. You don’t need to have an on-going subscription, you just need an account.

If you have any questions about this sale, please don’t hesitate to email me.

Go get your Python Morsels redemption codes



from Planet Python
via read more

Catalin George Festila: Python 3.7.5 : Script install and import python packages.

This script will try to import Python packages from a list. If a package is not installed, it will be installed on the system.

import sys
import subprocess

if __name__ == '__main__':
    def ModuleInstall(package_name):
        try:
            subprocess.check_call(['python3', '-m', 'pip3', 'install', package_name, "--user"])
        except:
            subprocess.check_call(['python', '-m'
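The excerpt above is cut off mid-line, so here is a self-contained sketch of the same idea; the fallback call, the example package list, and the surrounding loop are assumptions filling in the missing part (and '-m pip' is used where the excerpt says '-m pip3'):

import subprocess

def module_install(package_name):
    # Install with pip via python3, falling back to the plain python interpreter.
    try:
        subprocess.check_call(
            ['python3', '-m', 'pip', 'install', package_name, '--user'])
    except Exception:
        subprocess.check_call(
            ['python', '-m', 'pip', 'install', package_name, '--user'])

if __name__ == '__main__':
    for name in ['requests', 'numpy']:  # example package list
        try:
            __import__(name)            # already importable?
        except ImportError:
            module_install(name)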

from Planet Python
via read more

Tk Assistant V1.w

Tk Assistant V1.w is a beginner friendly way of getting familiar with using Tkinter. It currently contains 40 stand-alone Python source code files, of mostly widgets. This is the Windows only version, Linux soon.

from Python Coder
via read more

Stack Abuse: Unit Testing in Python with Unittest

Introduction

In almost all fields, products are thoroughly tested before being released to the market to ensure their quality and that they work as intended.

Medicine, cosmetic products, vehicles, phones, laptops are all tested to ensure that they uphold a certain level of quality that was promised to the consumer. Given the influence and reach of software in our daily lives, it is important that we test our software thoroughly before releasing it to our users to avoid issues coming up when it is in use.

There are various ways and methods of testing our software, and in this article we will concentrate on testing our Python programs using the Unittest framework.

Unit Testing vs Other Forms of Testing

There are various ways to test software, which are broadly grouped into functional and non-functional testing.

  • Non-functional testing: Meant to verify and check the non-functional aspects of the software such as reliability, security, availability, and scalability. Examples of non-functional testing include load testing and stress testing.
  • Functional testing: Involves testing our software against the functional requirements to ensure that it delivers the functionality required. For example, we can test if our shopping platform sends emails to users after placing their orders by simulating that scenario and checking for the email.

Unit testing falls under functional testing alongside integration testing and regression testing.

Unit testing refers to a method of testing where software is broken down into different components (units) and each unit is tested functionally and in isolation from the other units or modules.

A unit here refers to the smallest part of a system that achieves a single function and is testable. The goal of unit testing is to verify that each component of a system performs as expected which in turn confirms that the entire system meets and delivers the functional requirements.

Unit testing is generally performed before integration testing since, in order to verify that parts of a system work well together, we first have to verify that they work as expected individually. It is also generally carried out by the developers building the individual components during the development process.

Benefits of Unit Testing

Unit testing is beneficial in that it surfaces bugs and issues early in the development process, which eventually speeds it up.

The cost of fixing bugs identified during unit testing is also low as compared to fixing them during integration testing or while in production.

Unit tests also serve as documentation of the project by defining what each part of the system does through well written and documented tests. When refactoring a system or adding features, unit tests help guard against changes that break the existing functionality.

Unittest Framework

Inspired by the JUnit testing framework for Java, unittest is a testing framework for Python programs that comes bundled with the Python distribution since Python 2.1. It is sometimes referred to as PyUnit. The framework supports the automation and aggregation of tests and common setup and shutdown code for them.

It achieves this and more through the following concepts:

  • Test Fixture: Defines the preparation required for the execution of the tests and any actions that need to be done after the conclusion of a test. Fixtures can include database setup and connection, creation of temporary files or directories, and the subsequent cleanup or deletion of the files after the test has been completed.
  • Test Case: Refers to the individual test that checks for a specific response in a given scenario with specific inputs.
  • Test Suite: Represents an aggregation of test cases that are related and should be executed together.
  • Test Runner: Coordinates the execution of the tests and provides the results of the testing process to the user through a graphical user interface, the terminal or a report written to a file.
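
To make the last two concepts concrete, here is a minimal sketch (not taken from this tutorial) that builds a suite by hand and runs it with a text runner instead of relying on unittest.main():

import unittest

class ExampleTestCase(unittest.TestCase):
    def test_upper(self):
        self.assertEqual("abc".upper(), "ABC")

    def test_split(self):
        self.assertEqual("a b".split(), ["a", "b"])

if __name__ == "__main__":
    # Test suite: aggregate related test cases so they run together.
    suite = unittest.TestSuite()
    suite.addTest(ExampleTestCase("test_upper"))
    suite.addTest(ExampleTestCase("test_split"))

    # Test runner: executes the suite and reports the results to the terminal.
    runner = unittest.TextTestRunner(verbosity=2)
    runner.run(suite)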

unittest is not the only testing framework for Python out there, others include Pytest, Robot Framework, Lettuce for BDD, and Behave Framework.

If you're interested in reading more about Test-Driven Development in Python with PyTest, we've got you covered!

Unittest Framework in Action

We are going to explore the unittest framework by building a simple calculator application and writing the tests to verify that it works as expected. We will use the Test-Driven Development process by starting with the tests then implementing the functionality to make the tests pass.

Even though it is a good practice to develop our Python application in a virtual environment, for this example it will not be mandatory since unittest ships with the Python distribution and we will not need any other external packages to build our calculator.

Our calculator will perform simple addition, subtraction, multiplication, and division operations between two integers. These requirements will guide our functional tests using the unittest framework.

We will test the four operations supported by our calculator separately and write the tests for each in a separate test suite since the tests of a particular operation are expected to be executed together. Our test suites will be housed in one file and our calculator in a separate file.

Our calculator will be a SimpleCalculator class with functions to handle the four operations expected of it. Let us begin testing by writing the tests for the addition operation in our test_simple_calculator.py:

import unittest
from simple_calculator import SimpleCalculator

class AdditionTestSuite(unittest.TestCase):
    def setUp(self):
        """ Executed before every test case """
        self.calculator = SimpleCalculator()

    def tearDown(self):
        """ Executed after every test case """
        print("\ntearDown executing after the test case. Result:")

    def test_addition_two_integers(self):
        result = self.calculator.sum(5, 6)
        self.assertEqual(result, 11)

    def test_addition_integer_string(self):
        result = self.calculator.sum(5, "6")
        self.assertEqual(result, "ERROR")

    def test_addition_negative_integers(self):
        result = self.calculator.sum(-5, -6)
        self.assertEqual(result, -11)
        self.assertNotEqual(result, 11)

# Execute all the tests when the file is executed
if __name__ == "__main__":
    unittest.main()

We start by importing the unittest module and creating a test suite (AdditionTestSuite) for the addition operation.

In it, we create a setUp() method that is called before every test case to create our SimpleCalculator object that will be used to perform the calculations.

The tearDown() method is executed after every test case and since we do not have much use for it at the moment, we will just use it to print out the results of each test.

The functions test_addition_two_integers(), test_addition_integer_string() and test_addition_negative_integers() are our test cases. The calculator is expected to add two positive or negative integers and return the sum. When presented with an integer and a string, our calculator is supposed to return an error.

The assertEqual() and assertNotEqual() are functions that are used to validate the output of our calculator. The assertEqual() function checks whether the two values provided are equal, in our case, we expect the sum of 5 and 6 to be 11, so we will compare this to the value returned by our calculator.

If the two values are equal, the test has passed. Other assertion functions offered by unittest include:

  • assertTrue(a): Checks whether the expression provided is true
  • assertGreater(a, b): Checks whether a is greater than b
  • assertNotIn(a, b): Checks whether a is not in b
  • assertLessEqual(a, b): Checks whether a is less than or equal to b
  • etc...

A list of these assertions can be found in this cheat sheet.
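
As a quick, hedged illustration of a few of these assertions in a standalone test case (not part of the calculator example):

import unittest

class AssertionExamples(unittest.TestCase):
    def test_examples(self):
        self.assertTrue(3 < 5)          # the expression is true
        self.assertGreater(10, 2)       # 10 > 2
        self.assertNotIn(4, [1, 2, 3])  # 4 is not in the list
        self.assertLessEqual(2, 2)      # 2 <= 2

if __name__ == "__main__":
    unittest.main()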

When we execute the test file, this is the output:

$ python3 test_simple_calulator.py

tearDown executing after the test case. Result:
E
tearDown executing after the test case. Result:
E
tearDown executing after the test case. Result:
E
======================================================================
ERROR: test_addition_integer_string (__main__.AdditionTestSuite)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_simple_calulator.py", line 22, in test_addition_integer_string
    result = self.calculator.sum(5, "6")
AttributeError: 'SimpleCalculator' object has no attribute 'sum'

======================================================================
ERROR: test_addition_negative_integers (__main__.AdditionTestSuite)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_simple_calulator.py", line 26, in test_addition_negative_integers
    result = self.calculator.sum(-5, -6)
AttributeError: 'SimpleCalculator' object has no attribute 'sum'

======================================================================
ERROR: test_addition_two_integers (__main__.AdditionTestSuite)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_simple_calulator.py", line 18, in test_addition_two_integers
    result = self.calculator.sum(5, 6)
AttributeError: 'SimpleCalculator' object has no attribute 'sum'

----------------------------------------------------------------------
Ran 3 tests in 0.001s

FAILED (errors=3)

At the top of the output, we can see the execution of the tearDown() function through the printing of the message we specified. This is followed by the letter E and error messages arising from the execution of our tests.

There are three possible outcomes of a test: it can pass, fail, or encounter an error. The unittest framework indicates the three scenarios by using:

  • A full-stop (.): Indicates a passing test
  • The letter ‘F’: Indicates a failing test
  • The letter ‘E’: Indicates an error occurred during the execution of the test

In our case, we are seeing the letter E, meaning that our tests encountered errors during execution. We are receiving these errors because we have not yet implemented the addition functionality of our calculator:

class SimpleCalculator:
    def sum(self, a, b):
        """ Function to add two integers """
        return a + b

Our calculator is now ready to add two numbers, but to be sure it will perform as expected, let us remove the tearDown() function from our tests and run our tests once again:

$ python3 test_simple_calulator.py
E..
======================================================================
ERROR: test_addition_integer_string (__main__.AdditionTestSuite)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_simple_calulator.py", line 22, in test_addition_integer_string
    result = self.calculator.sum(5, "6")
  File "/Users/robley/Desktop/code/python/unittest_demo/src/simple_calculator.py", line 7, in sum
    return a + b
TypeError: unsupported operand type(s) for +: 'int' and 'str'

----------------------------------------------------------------------
Ran 3 tests in 0.002s

FAILED (errors=1)

Our errors have gone down from 3 to just 1. The summary on the first line, E.., indicates that one test resulted in an error and could not complete execution, while the remaining two passed. To make that test pass, we have to refactor our sum function as follows:

    def sum(self, a, b):
        if isinstance(a, int) and isinstance(b, int):
            return a + b

When we run our tests one more time:

$ python3 test_simple_calulator.py
F..
======================================================================
FAIL: test_addition_integer_string (__main__.AdditionTestSuite)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_simple_calulator.py", line 23, in test_addition_integer_string
    self.assertEqual(result, "ERROR")
AssertionError: None != 'ERROR'

----------------------------------------------------------------------
Ran 3 tests in 0.001s

FAILED (failures=1)

This time, our sum function executes to completion but our test fails. This is because we did not return any value when one of the inputs is not an integer. Our assertion compares None to 'ERROR' and, since they are not equal, the test fails. To make our test pass, we have to return the error string from our sum() function:

def sum(self, a, b):
    if isinstance(a, int) and isinstance(b, int):
        return a + b
    else:
        return "ERROR"

And when we run our tests:

$ python3 test_simple_calulator.py
...
----------------------------------------------------------------------
Ran 3 tests in 0.000s

OK

All our tests pass now and we get 3 full-stops, indicating that all 3 of our tests for the addition functionality are passing. The subtraction, multiplication, and division test suites are implemented in a similar fashion, as sketched below.
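
For example, a hedged sketch of how the subtraction suite could look (the actual implementations live in the linked gist, and the subtract() method name is an assumption):

# In test_simple_calculator.py, alongside the other suites
class SubtractionTestSuite(unittest.TestCase):
    def setUp(self):
        """ Executed before every test case """
        self.calculator = SimpleCalculator()

    def test_subtraction_two_integers(self):
        result = self.calculator.subtract(10, 4)
        self.assertEqual(result, 6)

    def test_subtraction_integer_string(self):
        result = self.calculator.subtract(10, "4")
        self.assertEqual(result, "ERROR")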

We can also test if an exception is raised. For instance, when a number is divided by zero, the ZeroDivisionError exception is raised. In our DivisionTestSuite, we can confirm whether the exception was raised:

class DivisionTestSuite(unittest.TestCase):
    def setUp(self):
        """ Executed before every test case """
        self.calculator = SimpleCalculator()

    def test_divide_by_zero_exception(self):
        with self.assertRaises(ZeroDivisionError):
            self.calculator.divide(10, 0)

The test_divide_by_zero_exception() will execute the divide(10, 0) function of our calculator and confirm that the exception was indeed raised. We can execute the DivisionTestSuite in isolation, as follows:

$ python3 -m unittest test_simple_calulator.DivisionTestSuite.test_divide_by_zero_exception
.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK

The full division functionality test suite can be found in the gist linked below, alongside the test suites for the multiplication and subtraction functionality.
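
For the exception test above to pass, divide() only needs to perform a real division so that dividing by zero raises ZeroDivisionError naturally. A hedged sketch (the gist's actual code may differ):

# In simple_calculator.py, alongside sum()
    def divide(self, a, b):
        """ Function to divide two integers """
        if isinstance(a, int) and isinstance(b, int):
            return a / b  # a / 0 raises ZeroDivisionError on its own
        else:
            return "ERROR"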

Conclusion

In this article, we have explored the unittest framework and identified the situations where it is used when developing Python programs. The unittest framework, also known as PyUnit, comes with the Python distribution by default, unlike other testing frameworks. In a TDD manner, we wrote the tests for a simple calculator, executed the tests, and then implemented the functionality to make the tests pass.

The unittest framework provided the functionality to create and group test cases and check the output of our calculator against the expected output to verify that it's working as expected.

The full calculator and test suites can be found here in this gist on GitHub.



from Planet Python
via read more

Janusworx: #100DaysOfCode, Day 009 – The Collections Module

I cheated and peeked again at the solution :)
After five days, I think I needed help.
But it was still a very good day.
I learned lots.

When I started this little project, I saw videos about defaultdicts and namedtuples and then kinda forgot that they would be of some use to me in my project itself.
That realisation came yesterday.
Like they say, it happened very slowly and then all at once! I wrote up a quick workflow of how the program was supposed to work on paper.
And then I had a decision to make.
Do I peek at the answer? or not?
In the end, I did.
I wanted confirmation of my thought process, and realised that if I was going to figure out the code itself, this would take much, much longer.
Besides, writing Python will come to me if I stick with this as I have been doing, so no guilt about copying code.

The instructors did solve the problem, exactly the way I envisioned it in my head :)
And the code, to my inexperienced fingers was tricky. (I don’t know lambdas or expressions in general and the instructor uses them liberally; a dictionary expression to populate a dict and a lambda to sort a list)
However, I take small comfort in the fact that I did write one third of the code all by myself.
Just goes to show, how little fluency I have with the language.

But still! I am happy I got my thinking straight :)
Onwards!



from Planet Python
via read more


Thursday, November 28, 2019

Quansight Labs Blog: Variable Explorer improvements in Spyder 4

Spyder 4 will be released very soon with lots of interesting new features that you'll want to check out, reflecting years of effort by the team to improve the user experience. In this post, we will be talking about the improvements made to the Variable Explorer.

These include the brand new Object Explorer for inspecting arbitrary Python variables, full support for MultiIndex dataframes with multiple dimensions, and the ability to filter and search for variables by name and type, and much more.

It is important to mention that several of the above improvements were made possible through integrating the work of two other projects. Code from gtabview was used to implement the multi-dimensional Pandas indexes, while objbrowser was the foundation of the new Object Explorer.

Read more… (7 min remaining to read)



from Planet Python
via read more

Codementor: teach your kids to build their own game with Python - 2

a series of tutorials that teaches kids/beginners how to develop the famous Space Invaders game with Python.

from Planet Python
via read more

Astropy Receives $900k Grant from Moore Foundation

The post Astropy Receives $900k Grant from Moore Foundation appeared first on NumFOCUS.



from Planet SciPy
read more

mlpack Machine Learning Library joins NumFOCUS Sponsored Projects

The post mlpack Machine Learning Library joins NumFOCUS Sponsored Projects appeared first on NumFOCUS.



from Planet SciPy
read more

Reuven Lerner: My Black Friday sale is live! Take 50% off any course in Python or data science

As promised, the Black Friday sale has begun in my online store. Through Monday, my courses and books are all 50% off with the coupon code BF2019.

This includes all eight of the video courses:

It also includes all six cohorts of Weekly Python Exercise that will start in 2020!  Pay only $50 (rather than $100) per cohort with the coupon code BF2019:

People have had very kind things to say about my courses.  For example:

  • “The exercises are perfect for me because they are right in my “wheelhouse”. I have enough background knowledge that the context of the problems is relevant in my experience, yet I can’t just rattle off the solutions instantly. I have to puzzle over them as I try to solve them. I do usually achieve my goal of coming up with a solution that I am pleased with prior to the answer coming out on the following Monday.”  — Doug (about WPE)
  • “I was a total python noob when I started.  I just wanted to learn the syntax, how to look at problems and find the solution. You provided both.  Of course I did a lot of reading too but your teaching is instrumental in drilling some concepts into our brains.” — Jean-Pierre (about WPE)
  • “It was an amazing course. Apart from comprehensions, you have provided lots of information about Python programming. The exercises were really challenging.” — Jonayed (about “Comprehending Comprehensions”)
  • “I really liked the way you went slow and explained everything in microscopic detail, acknowledging where the NumPy syntax is non-intuitive.”  — David (about “NumPy”)

Again, to take advantage of this discount, just use the coupon code BF2019 at checkout.

But be sure to do it in the coming days — because as of Tuesday, this year’s Black Friday sale will be completely over.

The post My Black Friday sale is live! Take 50% off any course in Python or data science appeared first on Reuven Lerner.



from Planet Python
via read more

Codementor: How I learned Python

About me Hi, I'm Kai and I am currently between my Bachelor's and my Master's Degree in Computer Engineering / Science. I want to help people to develop their skills in python. Why I wanted to...

from Planet Python
via read more

Wingware Blog: Navigating Python Code with Wing Pro 7 (part 3 of 3)

Last week and the week before, we looked at some of the code navigation features in Wing, including goto-definition, find uses, project-wide search, code index menus, and the Source Browser.

This week we'll finish up this mini-series by looking at how to quickly and easily find and open files or visit symbols in Python code by typing a name fragment.

Project Configuration

The features described here assume that you have used Add Existing Directory in the Project menu to add your source code to your project. Typically the project should contain the code you are actively working on. Packages that your code uses can be left out of the project, unless you anticipate often wanting to open or search files in them. Wing will still be able to find them through the Python Path, as needed for auto-completion, code warnings, and other purposes.

Open From Project

Open from Project from the File menu is typically the easiest way to navigate to a file by name. This displays a dialog that lists the project files whose names match a fragment:

/images/blog/code-navigation/open-from-project.png

Fragments can be abbreviations of the file name and may match enclosing directory names if they contain / or \. The arrow keys navigate the list and pressing Enter opens the selected file.

Find Symbol

A similar interface is available to find Python code symbols by name. For the current file, this is Find Symbol in the Source menu. For all project files, use Find Symbol in Project instead:

/images/blog/code-navigation/find-symbol-in-project.png

That's it for now! We'll be back soon with more Wing Tips for Wing Python IDE.

As always, please don't hesitate to email support@wingware.com if you run into problems or have any questions.



from Planet Python
via read more

Wednesday, November 27, 2019

Python Circle: Improve Your Python Practices: Debugging, Testing, and Maintenance

improving your python skills, debugging, testing and practice, pypi

from Planet Python
via read more

Janusworx: #100DaysOfCode, Day 008 – The Collections Module

Finally feels like something is happening.
Did two hours today.

I don’t know if what I do is cheating, but I darn near print everything to see output and then iterate on the errors.

I understood how to work with csv files and process them and why ordered dictionaries can be useful.
I used that to process my csv file and read and print select fields.

Will work on sorting them somehow and figure out frequency based on ratings tomorrow.

Pleased with myself. Today was a good day!




from Planet Python
via read more

Talk Python to Me: #240 A guided tour of the CPython source code

You might use Python every day. But how much do you know about what happens under the covers, down at the C level? When you type something like variable = [], what are the byte-codes that accomplish this? How about the class backing the list itself?

from Planet Python
via read more

Python Bytes: #158 There's a bounty on your open-source bugs!



from Planet Python
via read more

Python Anywhere: Python 3.8 now available!

If you signed up since 26 November, you'll have Python 3.8 available on your account -- you can use it just like any other Python version.

If you signed up before then, it's a little more complicated, because adding Python 3.8 to your account requires changing your system image. Each account has an associated system image, which determines which Python versions, Python packages, operating system packages, and so on are available. The new image is called "fishnchips" (after the previous system images, "classic", "dangermouse" and "earlgrey").

What this means is that if we change your system image, the pre-installed Python packages will all get upgraded, which means that any code you have that depends on them might stop working if it's not compatible with the new versions.

Additionally, if you're using virtualenvs, because this update upgrades the point releases of the older Python versions (for example, 3.7.0 gets upgraded to 3.7.5), the update may make your envs stop working -- if so, you'll need to rebuild them.

So, long story short -- we can switch your account over to the new system image, but you may need to rebuild your virtualenvs afterwards if you're using them -- and you may need to update your code to handle newer pre-installed Python packages if you're not using virtualenvs.

There are more details about exactly which package versions are included in which system image on the batteries included page. And if you'd like to switch your account over to fishnchips, just drop us a line using the "Send feedback" button. (If you've read all of the above, and understand that you may have to make code/virtualenv changes, mention that you have in the feedback message as otherwise we'll respond by basically repeating all of the stuff we just said, and asking "are you sure?")



from Planet Python
via read more


Real Python: Python Descriptors: An Introduction

Descriptors are a specific Python feature that power a lot of the magic hidden under the language’s hood. If you’ve ever thought that Python descriptors are an advanced topic with few practical applications, then this tutorial is the perfect tool to help you understand this powerful feature. You’ll come to understand why Python descriptors are such an interesting topic, and what kind of use cases you can apply them to.

By the end of this tutorial, you’ll know:

  • What Python descriptors are
  • Where they’re used in Python’s internals
  • How to implement your own descriptors
  • When to use Python descriptors

This tutorial is intended for intermediate to advanced Python developers as it concerns Python internals. However, if you’re not at this level yet, then just keep reading! You’ll find useful information about Python and the lookup chain.


What Are Python Descriptors?

Descriptors are Python objects that implement a method of the descriptor protocol, which gives you the ability to create objects that have special behavior when they’re accessed as attributes of other objects. Here you can see the correct definition of the descriptor protocol:

__get__(self, obj, type=None) -> object
__set__(self, obj, value) -> None
__delete__(self, obj) -> None
__set_name__(self, owner, name)

If your descriptor implements just .__get__(), then it’s said to be a non-data descriptor. If it implements .__set__() or .__delete__(), then it’s said to be a data descriptor. Note that this difference is not just about the name, but it’s also a difference in behavior. That’s because data descriptors have precedence during the lookup process, as you’ll see later on.

Take a look at the following example, which defines a descriptor that logs something on the console when it’s accessed:

# descriptors.py
class Verbose_attribute():
    def __get__(self, obj, type=None) -> object:
        print("accessing the attribute to get the value")
        return 42
    def __set__(self, obj, value) -> None:
        print("accessing the attribute to set the value")
        raise AttributeError("Cannot change the value")

class Foo():
    attribute1 = Verbose_attribute()

my_foo_object = Foo()
x = my_foo_object.attribute1
print(x)

In the example above, Verbose_attribute() implements the descriptor protocol. Once it’s instantiated as an attribute of Foo, it can be considered a descriptor.

As a descriptor, it has binding behavior when it’s accessed using dot notation. In this case, the descriptor logs a message on the console every time it’s accessed to get or set a value:

  • When it’s accessed to .__get__() the value, it always returns the value 42.
  • When it’s accessed to .__set__() a specific value, it raises an AttributeError exception, which is the recommended way to implement read-only descriptors.

Now, run the example above and you’ll see the descriptor log the access to the console before returning the constant value:

$ python descriptors.py
accessing the attribute to get the value
42

Here, when you try to access attribute1, the descriptor logs this access to the console, as defined in .__get__().

How Descriptors Work in Python’s Internals

If you have experience as an object-oriented Python developer, then you may think that the previous example’s approach is a bit of overkill. You could achieve the same result by using properties. While this is true, you may be surprised to know that properties in Python are just… descriptors! You’ll see later on that properties are not the only feature that makes use of Python descriptors.

Python Descriptors in Properties

If you want to get the same result as the previous example without explicitly using a Python descriptor, then the most straightforward approach is to use a property. The following example uses a property that logs a message to the console when it’s accessed:

# property_decorator.py
class Foo():
    @property
    def attribute1(self) -> object:
        print("accessing the attribute to get the value")
        return 42

    @attribute1.setter
    def attribute1(self, value) -> None:
        print("accessing the attribute to set the value")
        raise AttributeError("Cannot change the value")

my_foo_object = Foo()
x = my_foo_object.attribute1
print(x)

The example above makes use of decorators to define a property, but as you may know, decorators are just syntactic sugar. The example before, in fact, can be written as follows:

# property_function.py
class Foo():
    def getter(self) -> object:
        print("accessing the attribute to get the value")
        return 42

    def setter(self, value) -> None:
        print("accessing the attribute to set the value")
        raise AttributeError("Cannot change the value")

    attribute1 = property(getter, setter)

my_foo_object = Foo()
x = my_foo_object.attribute1
print(x)

Now you can see that the property has been created by using property(). The signature of this function is as follows:

property(fget=None, fset=None, fdel=None, doc=None) -> object

property() returns a property object that implements the descriptor protocol. It uses the parameters fget, fset and fdel for the actual implementation of the three methods of the protocol.

Python Descriptors in Methods and Functions

If you’ve ever written an object-oriented program in Python, then you’ve certainly used methods. These are regular functions that have the first argument reserved for the object instance. When you access a method using dot notation, you’re calling the corresponding function and passing the object instance as the first parameter.

The magic that transforms your obj.method(*args) call into method(obj, *args) is inside a .__get__() implementation of the function object that is, in fact, a non-data descriptor. In particular, the function object implements .__get__() so that it returns a bound method when you access it with dot notation. The (*args) that follow invoke the functions by passing all the extra arguments needed.

To get an idea for how it works, take a look at this pure Python example from the official docs:

class Function(object):
    . . .
    def __get__(self, obj, objtype=None):
        "Simulate func_descr_get() in Objects/funcobject.c"
        if obj is None:
            return self
        return types.MethodType(self, obj)

In the example above, when the function is accessed with dot notation, .__get__() is called and a bound method is returned.

This works for regular instance methods just like it does for class methods or static methods. So, if you call a static method with obj.method(*args), then it’s automatically transformed into method(*args). Similarly, if you call a class method with obj.method(*args), then it’s automatically transformed into method(type(obj), *args).

Note: To learn more about *args, check out Python args and kwargs: Demystified.

In the official docs, you can find some examples of how static methods and class methods would be implemented if they were written in pure Python instead of the actual C implementation. For instance, a possible static method implementation could be this:

class StaticMethod(object):
    "Emulate PyStaticMethod_Type() in Objects/funcobject.c"
    def __init__(self, f):
        self.f = f

    def __get__(self, obj, objtype=None):
        return self.f

Likewise, this could be a possible class method implementation:

class ClassMethod(object):
    "Emulate PyClassMethod_Type() in Objects/funcobject.c"
    def __init__(self, f):
        self.f = f

    def __get__(self, obj, klass=None):
        if klass is None:
            klass = type(obj)
        def newfunc(*args):
            return self.f(klass, *args)
        return newfunc

Note that, in Python, a class method is just a static method that takes the class reference as the first argument of the argument list.

How Attributes Are Accessed With the Lookup Chain

To understand a little more about Python descriptors and Python internals, you need to understand what happens in Python when an attribute is accessed. In Python, every object has a built-in __dict__ attribute. This is a dictionary that contains all the attributes defined in the object itself. To see this in action, consider the following example:

class Vehicle():
    can_fly = False
    number_of_weels = 0

class Car(Vehicle):
    number_of_weels = 4

    def __init__(self, color):
        self.color = color

my_car = Car("red")
print(my_car.__dict__)
print(type(my_car).__dict__)

This code creates a new object and prints the contents of the __dict__ attribute for both the object and the class. Now, run the script and analyze the output to see the __dict__ attributes set:

{'color': 'red'}
{'__module__': '__main__', 'number_of_weels': 4, '__init__': <function Car.__init__ at 0x10fdeaea0>, '__doc__': None}

The __dict__ attributes are set as expected. Note that, in Python, everything is an object. A class is actually an object as well, so it will also have a __dict__ attribute that contains all the attributes and methods of the class.

So, what’s going on under the hood when you access an attribute in Python? Let’s make some tests with a modified version of the former example. Consider this code:

# lookup.py
class Vehicle(object):
    can_fly = False
    number_of_weels = 0

class Car(Vehicle):
    number_of_weels = 4

    def __init__(self, color):
        self.color = color

my_car = Car("red")

print(my_car.color)
print(my_car.number_of_weels)
print(my_car.can_fly)

In this example, you create an instance of the Car class that inherits from the Vehicle class. Then, you access some attributes. If you run this example, then you can see that you get all the values you expect:

$ python lookup.py
red
4
False

Here, when you access the attribute color of the instance my_car, you’re actually accessing a single value of the __dict__ attribute of the object my_car. When you access the attribute number_of_weels of the object my_car, you’re really accessing a single value of the __dict__ attribute of the class Car. Finally, when you access the can_fly attribute, you’re actually accessing it by using the __dict__ attribute of the Vehicle class.

This means that it’s possible to rewrite the above example like this:

# lookup2.py
class Vehicle():
    can_fly = False
    number_of_weels = 0

class Car(Vehicle):
    number_of_weels = 4

    def __init__(self, color):
        self.color = color

my_car = Car("red")

print(my_car.__dict__['color'])
print(type(my_car).__dict__['number_of_weels'])
print(type(my_car).__base__.__dict__['can_fly'])

When you test this new example, you should get the same result:

$ python lookup2.py
red
4
False

So, what happens when you access the attribute of an object with dot notation? How does the interpreter know what you really need? Well, here’s where a concept called the lookup chain comes in:

  • First, you’ll get the result returned from the __get__ method of the data descriptor named after the attribute you’re looking for.

  • If that fails, then you’ll get the value of your object’s __dict__ for the key named after the attribute you’re looking for.

  • If that fails, then you’ll get the result returned from the __get__ method of the non-data descriptor named after the attribute you’re looking for.

  • If that fails, then you’ll get the value of your object type’s __dict__ for the key named after the attribute you’re looking for.

  • If that fails, then you’ll get the value of your object parent type’s __dict__ for the key named after the attribute you’re looking for.

  • If that fails, then the previous step is repeated for all the parent’s types in the method resolution order of your object.

  • If everything else has failed, then you’ll get an AttributeError exception.

Now you can see why it’s important to know whether a descriptor is a data descriptor or a non-data descriptor: they’re on different levels of the lookup chain, and you’ll see later on that this difference in behavior can be very convenient.
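
A small sketch (not from this tutorial) makes the precedence visible: a data descriptor wins over the instance’s __dict__, while a non-data descriptor loses to it:

# lookup_precedence.py (illustrative example)
class DataDesc():
    def __get__(self, obj, type=None):
        return "data descriptor"
    def __set__(self, obj, value):
        pass  # ignore writes, just to qualify as a data descriptor

class NonDataDesc():
    def __get__(self, obj, type=None):
        return "non-data descriptor"

class Foo():
    a = DataDesc()
    b = NonDataDesc()

f = Foo()
f.__dict__["a"] = "instance value"  # try to shadow both descriptors
f.__dict__["b"] = "instance value"
print(f.a)  # data descriptor wins: prints "data descriptor"
print(f.b)  # instance __dict__ wins: prints "instance value"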

How to Use Python Descriptors Properly

If you want to use Python descriptors in your code, then you just need to implement the descriptor protocol. The most important methods of this protocol are .__get__() and .__set__(), which have the following signature:

__get__(self, obj, type=None) -> object
__set__(self, obj, value) -> None

When you implement the protocol, keep these things in mind:

  • self is the instance of the descriptor you’re writing.
  • obj is the instance of the object your descriptor is attached to.
  • type is the type of the object the descriptor is attached to.

In .__set__(), you don’t have the type variable, because you can only call .__set__() on the object. In contrast, you can call .__get__() on both the object and the class.

Another important thing to know is that Python descriptors are instantiated just once per class. That means that every single instance of a class containing a descriptor shares that descriptor instance. This is something that you might not expect and can lead to a classic pitfall, like this:

# descriptors2.py
class OneDigitNumericValue():
    def __init__(self):
        self.value = 0
    def __get__(self, obj, type=None) -> object:
        return self.value
    def __set__(self, obj, value) -> None:
        if value > 9 or value < 0 or int(value) != value:
            raise AttributeError("The value is invalid")
        self.value = value

class Foo():
    number = OneDigitNumericValue()

my_foo_object = Foo()
my_second_foo_object = Foo()

my_foo_object.number = 3
print(my_foo_object.number)
print(my_second_foo_object.number)

my_third_foo_object = Foo()
print(my_third_foo_object.number)

Here, you have a class Foo that defines an attribute number, which is a descriptor. This descriptor accepts a single-digit numeric value and stores it in a property of the descriptor itself. However, this approach won’t work, because each instance of Foo shares the same descriptor instance. What you’ve essentially created is just a new class-level attribute.

Try to run the code and examine the output:

$ python descriptors2.py
3
3
3

You can see that all the instances of Foo have the same value for the attribute number, even though the last one was created after the my_foo_object.number attribute was set.

So, how can you solve this problem? You might think that it’d be a good idea to use a dictionary to save all the values of the descriptor for all the objects it’s attached to. This seems to be a good solution since .__get__() and .__set__() have the obj attribute, which is the instance of the object you’re attached to. You could use this value as a key for the dictionary.

Unfortunately, this solution has a big downside, which you can see in the following example:

# descriptors3.py
class OneDigitNumericValue():
    def __init__(self):
        self.value = {}

    def __get__(self, obj, type=None) -> object:
        try:
            return self.value[obj]
        except:
            return 0

    def __set__(self, obj, value) -> None:
        if value > 9 or value < 0 or int(value) != value:
            raise AttributeError("The value is invalid")
        self.value[obj] = value

class Foo():
    number = OneDigitNumericValue()

my_foo_object = Foo()
my_second_foo_object = Foo()

my_foo_object.number = 3
print(my_foo_object.number)
print(my_second_foo_object.number)

my_third_foo_object = Foo()
print(my_third_foo_object.number)

In this example, you use a dictionary for storing the value of the number attribute for all your objects inside your descriptor. When you run this code, you’ll see that it runs fine and that the behavior is as expected:

$ python descriptors3.py
3
0
0

Unfortunately, the downside here is that the descriptor is keeping a strong reference to the owner object. This means that if you destroy the object, then the memory is not released because the garbage collector keeps finding a reference to that object inside the descriptor!

You may think that the solution here could be the use of weak references. While that may work, you’d have to deal with the fact that not every object can be weakly referenced and that, when your objects get garbage collected, their entries disappear from your dictionary.
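For completeness, here's a hedged sketch of that weak-reference approach using weakref.WeakKeyDictionary. It removes the strong reference, but it only works for objects that support weak references:

# descriptors_weakref.py
import weakref

class OneDigitNumericValue():
    def __init__(self):
        # Entries are removed automatically when the key object is garbage collected
        self.value = weakref.WeakKeyDictionary()

    def __get__(self, obj, type=None) -> object:
        return self.value.get(obj, 0)

    def __set__(self, obj, value) -> None:
        if value > 9 or value < 0 or int(value) != value:
            raise AttributeError("The value is invalid")
        self.value[obj] = value

class Foo():
    number = OneDigitNumericValue()

my_foo_object = Foo()
my_foo_object.number = 3
print(my_foo_object.number)  # 3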

The best solution here is to simply not store values in the descriptor itself, but to store them in the object that the descriptor is attached to. Try this approach next:

# descriptors4.py
class OneDigitNumericValue():
    def __init__(self, name):
        self.name = name

    def __get__(self, obj, type=None) -> object:
        return obj.__dict__.get(self.name) or 0

    def __set__(self, obj, value) -> None:
        obj.__dict__[self.name] = value

class Foo():
    number = OneDigitNumericValue("number")

my_foo_object = Foo()
my_second_foo_object = Foo()

my_foo_object.number = 3
print(my_foo_object.number)
print(my_second_foo_object.number)

my_third_foo_object = Foo()
print(my_third_foo_object.number)

In this example, when you set a value to the number attribute of your object, the descriptor stores it in the __dict__ attribute of the object it’s attached to, using the same name as the descriptor itself.
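You can verify where the value ends up by inspecting the instance dictionaries of the objects from descriptors4.py:

# Continuing from descriptors4.py
print(vars(my_foo_object))         # {'number': 3}
print(vars(my_second_foo_object))  # {} -> __get__ falls back to the default of 0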

The only problem here is that when you instantiate the descriptor you have to specify the name as a parameter:

number = OneDigitNumericValue("number")

Wouldn’t it be better to just write number = OneDigitNumericValue()? It might, but if you’re running a version of Python older than 3.6, then you’ll need a little bit of magic with metaclasses and decorators. If you use Python 3.6 or higher, however, then the descriptor protocol has a new method .__set_name__() that does all this magic for you, as proposed in PEP 487:

__set_name__(self, owner, name)

With this new method, whenever the class that owns the descriptor is created, .__set_name__() is called on the descriptor and the name parameter is automatically set to the name of the class attribute the descriptor was assigned to.
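A tiny experiment (the class names here are only illustrative) shows when this happens: .__set_name__() runs at class definition time, before any instances exist:

# set_name_demo.py
class Named:
    def __set_name__(self, owner, name):
        print(f"__set_name__ called with owner={owner!r} and name={name!r}")
        self.name = name

class Example:
    attribute = Named()

# Defining the Example class above already prints:
# __set_name__ called with owner=<class '__main__.Example'> and name='attribute'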

Now, try to rewrite the former example for Python 3.6 and up:

# descriptors5.py
class OneDigitNumericValue():
    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, obj, type=None) -> object:
        return obj.__dict__.get(self.name) or 0

    def __set__(self, obj, value) -> None:
        obj.__dict__[self.name] = value

class Foo():
    number = OneDigitNumericValue()

my_foo_object = Foo()
my_second_foo_object = Foo()

my_foo_object.number = 3
print(my_foo_object.number)
print(my_second_foo_object.number)

my_third_foo_object = Foo()
print(my_third_foo_object.number)

Now, .__init__() has been removed and .__set_name__() has been implemented. This makes it possible to create your descriptor without specifying the name of the internal attribute that you need to use for storing the value. Your code also looks nicer and cleaner now!

Run this example one more time to make sure everything works:

$ python descriptors5.py
3
0
0

This example should run with no problems if you use Python 3.6 or higher.

Why Use Python Descriptors?

Now you know what Python descriptors are and how Python itself uses them to power some of its features, like methods and properties. You’ve also seen how to create a Python descriptor while avoiding some common pitfalls. Everything should be clear now, but you may still wonder why you should use them.

In my experience, I’ve known a lot of advanced Python developers who have never used this feature and have never needed it. That’s quite normal, because there are not many use cases where Python descriptors are necessary. However, that doesn’t mean that Python descriptors are just an academic topic for advanced users. There are still some good use cases that justify the effort of learning how to use them.

Lazy Properties

The first and most straightforward example is lazy properties. These are properties whose initial values are not loaded until they’re accessed for the first time. Then, they load their initial value and keep that value cached for later reuse.

Consider the following example. You have a class DeepThought that contains a method meaning_of_life() that returns a value after a lot of time spent in heavy concentration:

# slow_properties.py
import time

class DeepThought:
    def meaning_of_life(self):
        time.sleep(3)
        return 42

my_deep_thought_instance = DeepThought()
print(my_deep_thought_instance.meaning_of_life())
print(my_deep_thought_instance.meaning_of_life())
print(my_deep_thought_instance.meaning_of_life())

If you run this code and try to access the method three times, then you get an answer every three seconds, which is the length of the sleep time inside the method.

Now, a lazy property can instead evaluate this method just once when it’s first executed. Then, it will cache the resulting value so that, if you need it again, you can get it in no time. You can achieve this with the use of Python descriptors:

# lazy_properties.py
import time

class LazyProperty:
    def __init__(self, function):
        self.function = function
        self.name = function.__name__

    def __get__(self, obj, type=None) -> object:
        obj.__dict__[self.name] = self.function(obj)
        return obj.__dict__[self.name]

class DeepThought:
    @LazyProperty
    def meaning_of_life(self):
        time.sleep(3)
        return 42

my_deep_thought_instance = DeepThought()
print(my_deep_thought_instance.meaning_of_life)
print(my_deep_thought_instance.meaning_of_life)
print(my_deep_thought_instance.meaning_of_life)

Take your time to study this code and understand how it works. Can you see the power of Python descriptors here? In this example, when you use the @LazyProperty descriptor, you’re instantiating a descriptor and passing to it .meaning_of_life(). This descriptor stores both the method and its name as instance variables.

Since it is a non-data descriptor, when you first access the value of the meaning_of_life attribute, .__get__() is automatically called and executes .meaning_of_life() on the my_deep_thought_instance object. The resulting value is stored in the __dict__ attribute of the object itself. When you access the meaning_of_life attribute again, Python will use the lookup chain to find a value for that attribute inside the __dict__ attribute, and that value will be returned immediately.
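You can confirm this caching behavior by timing the accesses on a fresh instance and then peeking into its dictionary (the another_instance name below is just for this demo, continuing from lazy_properties.py):

# Continuing from lazy_properties.py, with a fresh instance
import time

another_instance = DeepThought()

start = time.time()
print(another_instance.meaning_of_life)  # takes about 3 seconds the first time
print(f"First access: {time.time() - start:.1f}s")

start = time.time()
print(another_instance.meaning_of_life)  # returned straight from the instance __dict__
print(f"Second access: {time.time() - start:.1f}s")

print(vars(another_instance))  # {'meaning_of_life': 42}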

Note that this works because, in this example, you’ve only implemented .__get__() from the descriptor protocol, which makes LazyProperty a non-data descriptor. If you had implemented a data descriptor, then the trick would not have worked: following the lookup chain, the data descriptor would have had precedence over the value stored in __dict__. To test this out, run the following code:

# wrong_lazy_properties.py
import time

class LazyProperty:
    def __init__(self, function):
        self.function = function
        self.name = function.__name__

    def __get__(self, obj, type=None) -> object:
        obj.__dict__[self.name] = self.function(obj)
        return obj.__dict__[self.name]

    def __set__(self, obj, value):
        pass

class DeepThought:
    @LazyProperty
    def meaning_of_life(self):
        time.sleep(3)
        return 42

my_deep_thought_instance = DeepThought()
print(my_deep_thought_instance.meaning_of_life)
print(my_deep_thought_instance.meaning_of_life)
print(my_deep_thought_instance.meaning_of_life)

In this example, you can see that just implementing .__set__(), even if it doesn’t do anything at all, turns LazyProperty into a data descriptor. Now the trick of the lazy property stops working: because a data descriptor takes precedence over the instance’s __dict__, .__get__() runs the slow method again on every access.
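You can check this by timing a couple of accesses (continuing from wrong_lazy_properties.py above); each one now pays the full three-second cost:

# Continuing from wrong_lazy_properties.py
import time

for attempt in range(2):
    start = time.time()
    print(my_deep_thought_instance.meaning_of_life)
    # Roughly 3 seconds every time, because __get__ runs the slow method again
    print(f"Access {attempt + 1}: {time.time() - start:.1f}s")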

D.R.Y. Code

Another typical use case for descriptors is to write reusable code and make your code D.R.Y. Python descriptors give developers a great tool to write reusable code that can be shared among different properties or even different classes.

Consider an example where you have five different properties with the same behavior. Each property can be set to a specific value only if it’s an even number. Otherwise, its value is set to 0:

# properties.py
class Values:
    def __init__(self):
        self._value1 = 0
        self._value2 = 0
        self._value3 = 0
        self._value4 = 0
        self._value5 = 0

    @property
    def value1(self):
        return self._value1

    @value1.setter
    def value1(self, value):
        self._value1 = value if value % 2 == 0 else 0

    @property
    def value2(self):
        return self._value2

    @value2.setter
    def value2(self, value):
        self._value2 = value if value % 2 == 0 else 0

    @property
    def value3(self):
        return self._value3

    @value3.setter
    def value3(self, value):
        self._value3 = value if value % 2 == 0 else 0

    @property
    def value4(self):
        return self._value4

    @value4.setter
    def value4(self, value):
        self._value4 = value if value % 2 == 0 else 0

    @property
    def value5(self):
        return self._value5

    @value5.setter
    def value5(self, value):
        self._value5 = value if value % 2 == 0 else 0

my_values = Values()
my_values.value1 = 1
my_values.value2 = 4
print(my_values.value1)
print(my_values.value2)

As you can see, you have a lot of duplicated code here. It’s possible to use Python descriptors to share behavior among all the properties. You can create an EvenNumber descriptor and use it for all the properties like this:

# properties2.py
class EvenNumber:
    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, obj, type=None) -> object:
        return obj.__dict__.get(self.name) or 0

    def __set__(self, obj, value) -> None:
        obj.__dict__[self.name] = (value if value % 2 == 0 else 0)

class Values:
    value1 = EvenNumber()
    value2 = EvenNumber()
    value3 = EvenNumber()
    value4 = EvenNumber()
    value5 = EvenNumber()

my_values = Values()
my_values.value1 = 1
my_values.value2 = 4
print(my_values.value1)
print(my_values.value2)

This code looks a lot better now! The duplicates are gone and the logic is now implemented in a single place so that if you need to change it, you can do so easily.
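Because the descriptor is just an ordinary class, nothing ties it to Values: you can reuse the same EvenNumber descriptor in a completely different class. The Measurements class below is only a made-up example:

# Continuing from properties2.py
class Measurements:
    width = EvenNumber()
    height = EvenNumber()

measurements = Measurements()
measurements.width = 7    # odd, so it is stored as 0
measurements.height = 10  # even, so it is kept
print(measurements.width)   # 0
print(measurements.height)  # 10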

Conclusion

Now that you know how Python uses descriptors to power some of its great features, you’ll be a more conscious developer who understands why some Python features have been implemented the way they are.

You’ve learned:

  • What Python descriptors are and when to use them
  • Where descriptors are used in Python’s internals
  • How to implement your own descriptors

What’s more, you now know of some specific use cases where Python descriptors are particularly helpful. For example, descriptors are useful when you have a common behavior that has to be shared among a lot of properties, even ones of different classes.

If you have any questions, leave a comment down below or contact me on Twitter! If you want to dive deeper into Python descriptors, then check out the official Python Descriptor HowTo Guide.





from Planet Python
via read more

TestDriven.io: Working with Static and Media Files in Django

This article looks at how to work with static and media files in a Django project, locally and in production.

from Planet Python
via read more